- 29 January 2008, 4 commits
-
-
Submitted by Ilpo Järvinen
It is going to replace the SACK fastpath hint quite soon... :-)
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Many assumptions that are true when no reordering or other strange events happen are not part of RFC3517. The FACK implementation is based on such assumptions. Previously (before the rewrite) the non-FACK SACK was basically doing fast rexmit and then it timed out all skbs when the first cumulative ACK arrived, which cannot really be called SACK-based recovery :-).

RFC3517 SACK disables these things:
- Per-SKB timeouts & head timeout entry to recovery
- Marking at least one skb while in recovery (RFC3517 does this only for the fast retransmission but not for the other skbs when cumulative ACKs arrive in the recovery)
- Sacktag's loss detection flavors B and C (see the comment before tcp_sacktag_write_queue)

This does not implement the "last resort" rule 3 of NextSeg, which allows retransmissions also when not enough SACK blocks have yet arrived above a segment for IsLost to return true [RFC3517].

The implementation differs from RFC3517 in these points:
- Rate-halving is used instead of FlightSize / 2.
- Instead of using dupACKs to trigger the recovery, the number of SACK blocks is used, as FACK does with SACK blocks + holes (which provides a more accurate number). The difference can hurt only if the receiver does not generate SACK blocks at all even though it claimed to be SACK-capable.
- Dupthresh is not a constant. The dynamic adjustments include both holes and sacked segments (equal to what FACK has) due to the complexity involved in determining the number of sacked blocks between highest_sack and the reordered segment. Thus it will be an over-estimate.

Implementation note: tcp_clean_rtx_queue doesn't need a lost_cnt tweak because the head skb at that point cannot be SACKED_ACKED (nor would such a situation last long enough to cause problems).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
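For readers unfamiliar with the RFC 3517 terms used above, here is a minimal userspace sketch of the IsLost() test the RFC defines (a segment counts as lost once DupThresh discontiguous SACKed sequences, or DupThresh * SMSS SACKed bytes, lie above it). The scoreboard layout and names are illustrative, not the kernel's data structures.

    #include <stdbool.h>
    #include <stdint.h>

    #define DUPTHRESH 3        /* RFC 3517 default; the patch above makes this dynamic */
    #define SMSS      1460     /* sender maximum segment size, assumed for the example */

    /* One already-SACKed block lying above the sequence number being tested. */
    struct sacked_block {
            uint32_t start_seq;
            uint32_t end_seq;
    };

    /* RFC 3517 IsLost(): a segment is considered lost when at least DupThresh
     * discontiguous SACKed sequences, or DupThresh * SMSS SACKed bytes, have
     * arrived above it. blocks[] holds the merged SACKed blocks above it. */
    static bool is_lost(const struct sacked_block *blocks, int nblocks)
    {
            uint32_t sacked_bytes = 0;
            int i;

            if (nblocks >= DUPTHRESH)
                    return true;
            for (i = 0; i < nblocks; i++)
                    sacked_bytes += blocks[i].end_seq - blocks[i].start_seq;
            return sacked_bytes >= DUPTHRESH * SMSS;
    }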
-
Submitted by Ilpo Järvinen
This implements more accurately what is stated in sacktag's overall comment: "Both of these heuristics are not used in Loss state, when we cannot account for retransmits accurately." When the CA_Loss state is entered, the state changer ensures that undo_marker is set only if no TCPCB_RETRANS skbs were found; thus a non-zero undo_marker in CA_Loss basically tells us that the R-bits still accurately reflect the current state of TCP.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
All intermediate conditions already include it; make them simpler as well.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 December 2007, 1 commit
-
-
Submitted by Gavin McCullagh
When a delayed ACK representing two packets arrives, there are two RTT samples available, one for each packet. The first (in order of seq number) will be artificially long due to the delay waiting for the second packet; the second will trigger the ACK and so will not itself be delayed. According to RFC 1323, the SRTT used for RTO calculation should use the first RTT, so receivers echo the timestamp from the first packet in the delayed ACK. For congestion control, however, measuring delayed-ACK delay is not desirable, as it varies independently of congestion.

The patch below causes seq_rtt and last_ackt to be updated with any available later packet RTTs, which should have less (and hopefully zero) delack delay. The RTT value then gets passed to ca_ops->pkts_acked().

Where TCP_CONG_RTT_STAMP was set, effort was made to suppress RTTs from within a TSO chunk (!fully_acked), using only the final ACK (which includes any TSO delay) to generate RTTs. This patch removes these checks so RTTs are passed for each ACK to ca_ops->pkts_acked().

For non-delay-based congestion control (cubic, h-tcp), the RTT is sometimes used for RTT scaling. In shortening the RTT, this may make them a little less aggressive. Delay-based schemes (e.g. vegas, veno, illinois) should get a cleaner, more accurate congestion signal, particularly for small cwnds. The congestion control module can potentially also filter out bad RTTs due to the delayed-ACK alarm by looking at the associated cnt, which (where delayed acking is in use) should probably be 1 if the alarm went off, or greater if the ACK was triggered by a packet.

Signed-off-by: Gavin McCullagh <gavin.mccullagh@nuim.ie>
Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
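A toy model of the RTT-sample selection described above (names and structures are illustrative, not the kernel's): when one cumulative ACK covers several packets, the sample handed to congestion control comes from the latest fully-acked packet, since that one actually triggered the ACK and carries the least delayed-ACK bias.

    #include <stdint.h>

    /* Toy record of one transmitted packet. */
    struct sent_pkt {
            uint32_t end_seq;   /* sequence number just past this packet's data */
            uint32_t sent_ms;   /* transmit timestamp in milliseconds */
    };

    /* Return the RTT (ms) measured against the latest packet fully covered by
     * ack_seq. Returns -1 if the ACK acknowledges nothing. q[] is in send order. */
    static int32_t cc_rtt_sample(const struct sent_pkt *q, int n,
                                 uint32_t ack_seq, uint32_t now_ms)
    {
            int32_t rtt = -1;
            int i;

            for (i = 0; i < n; i++) {
                    if ((int32_t)(ack_seq - q[i].end_seq) < 0)
                            break;                          /* not fully acked yet */
                    rtt = (int32_t)(now_ms - q[i].sent_ms); /* keep the latest sample */
            }
            return rtt;
    }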
-
- 17 December 2007, 1 commit
-
-
Submitted by Satoru SATOH
tcp_input_metrics() refers to the build-time constant TCP_RTO_MIN regardless of the minimum RTO configured with iproute2.
Signed-off-by: Satoru SATOH <satoru.satoh@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
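A minimal sketch of the intended behaviour (helper and field names are hypothetical, not the kernel API): prefer a per-route minimum RTO when one is configured, e.g. via iproute2's rto_min route option, and fall back to the build-time constant otherwise.

    #include <stdint.h>

    #define TCP_RTO_MIN_MS 200  /* assumed build-time default, for illustration only */

    /* Hypothetical per-route metrics: rto_min_ms is 0 when nothing is configured. */
    struct route_metrics {
            uint32_t rto_min_ms;
    };

    /* Use the route's configured minimum RTO if present, otherwise the constant. */
    static uint32_t effective_rto_min_ms(const struct route_metrics *rt)
    {
            if (rt && rt->rto_min_ms)
                    return rt->rto_min_ms;
            return TCP_RTO_MIN_MS;
    }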
-
- 05 December 2007, 2 commits
-
-
Submitted by Ilpo Järvinen
The previous location is after sacktag processing, which affects the counters tcp_packets_in_flight depends on. This may manifest as wrong behavior if new SACK blocks are present and the way seems clear for the call to tcp_cong_avoid, which in the case of tcp_reno_cong_avoid bails out early because it thinks that TCP is not limited by cwnd.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Though there's little need for everything that tcp_may_send_now does (actually, the state even had to be adjusted to pass some checks FRTO does not want to occur), it's more robust to let it make the decision whether sending is allowed.

State adjustments needed:
- Make sure the snd_cwnd limit is not hit in there
- Disable Nagle (if necessary) through frto_counter == 2

The result of the check for frto_counter in the argument to the tcp_enter_frto_loss call can just be open coded, so there isn't a need to store the previous frto_counter past tcp_may_send_now. In addition, the returns can then be combined.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 November 2007, 2 commits
-
-
Submitted by Ilpo Järvinen
I broke this in commit 3de96471 ("[TCP]: Wrap-safed reordering detection FRTO check"). tcp_process_frto should always see a valid frto_highmark. An invalid frto_highmark (zero) is very likely what ultimately caused the seqno compare in tcp_enter_frto_loss to go wrong, leading to the LOST-bit leak. Having LOST-bit integrity ensured, as done after commit 23aeeec3 ("[TCP] FRTO: Plug potential LOST-bit leak"), won't hurt. It may still be useful in some other, possibly legitimate, scenario.

Reported by Chazarain Guillaume <guichaz@yahoo.fr>.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
A NULL pointer can be returned from tcp_write_queue_head into cached_skb and then assigned to skb if packets_out is zero. Without this fix, the system is vulnerable to carefully crafted ACKs, which obviously is remotely triggerable. Besides, there's very little that needs to be done in sacktag if there weren't any packets outstanding, so just skipping the rest doesn't hurt.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
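A simplified illustration of the guard this fix implies (toy types, not the kernel function): return from SACK tagging before touching the write-queue head when nothing is outstanding.

    #include <stddef.h>

    struct toy_skb { int len; };

    struct toy_sock {
            unsigned int packets_out;
            struct toy_skb *write_queue_head;  /* NULL when the queue is empty */
    };

    /* Bail out before dereferencing a possibly-NULL queue head: with nothing
     * outstanding there is nothing for SACK tagging to do anyway. */
    static int toy_sacktag(struct toy_sock *tp)
    {
            struct toy_skb *skb;

            if (!tp->packets_out)
                    return 0;

            skb = tp->write_queue_head;  /* safe: packets_out != 0 implies a head skb */
            (void)skb;
            /* ... walk the retransmission queue and tag SACKed segments ... */
            return 0;
    }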
-
- 14 November 2007, 2 commits
-
-
Submitted by Ilpo Järvinen
It might be possible that, in some extreme scenario I just cannot now construct in my mind, the end_seq <= frto_highmark check does not match, causing lost_out and the LOST bits to become out of sync due to clearing and recounting in the loop. This may fix the LOST-bit leak reported by Chazarain Guillaume <guichaz@yahoo.fr>.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Otherwise TCP might violate the packet ordering principles that FRTO is based on. If the conventional recovery path is chosen, this won't be significant at all. In practice, any small enough value will be sufficient to provide proper operation for FRTO, yet other users of snd_cwnd might benefit from a "close enough" value. FRTO's formula is now equal to what tcp_enter_cwr() uses. FRTO used to check application-limitedness a bit differently, but I changed that in commit 575ee714, and as a result the check for application-limitedness became completely non-existent.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
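For reference, a sketch of the cwnd clamp the message says FRTO now shares with tcp_enter_cwr() (a simplified model with illustrative helpers, not the kernel code): the congestion window is reduced to at most what is currently in flight plus one segment.

    #include <stdint.h>

    /* Packets still in the network: sent but neither SACKed nor marked lost,
     * plus retransmissions (simplified accounting). */
    static uint32_t packets_in_flight(uint32_t packets_out, uint32_t sacked_out,
                                      uint32_t lost_out, uint32_t retrans_out)
    {
            return packets_out - (sacked_out + lost_out) + retrans_out;
    }

    /* The clamp shared with CWR entry: never leave cwnd far above the amount
     * of data actually outstanding. */
    static uint32_t clamp_cwnd(uint32_t snd_cwnd, uint32_t in_flight)
    {
            return snd_cwnd < in_flight + 1 ? snd_cwnd : in_flight + 1;
    }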
-
- 11 November 2007, 4 commits
-
-
Submitted by Ilpo Järvinen
In case we run out of memory when fragmenting, the clearing of FLAG_ONLY_ORIG_SACKED might get missed, which then feeds FRTO with false information. Move the clearing outside the skb processing loop so that it gets executed even if the skb loop terminates prematurely due to out-of-memory. Besides, the core of the loop now truly deals with a single skb only, which also enables the creation of a more self-contained tcp_sacktag_one later on. In addition, a small reorganization of if branches was made.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Fixes a subtle bug, like the one with fastpath_cnt_hint, happening due to the way GSO and the hints interact. Because hints are not reset when just a GSOed skb is partially ACKed, there's no guarantee that the relevant part of the write queue is going to be processed in sacktag at all (skbs below snd_una), because the fastpath hint can fast-forward the entry point. This was also in the way of future reductions in sacktag's skb processing. Also, future cleanups in sacktag can be made after this (in 2.6.25). This may make the reordering update in tcp_try_undo_partial redundant, but I'm not too sure, so I left it there.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Reordering detection fails to take into account that the reordered skb may have a pcount larger than 1. In that case the lowest of its packets has the largest reordering; the old formula used the highest of them, which is pcount - 1 packets less reordered.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
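A toy numeric model of the point above (purely illustrative, not the kernel's bookkeeping): when a reordered GSO skb covers pcount segments, measuring reordering from its first segment yields a value pcount - 1 larger than measuring from its last.

    #include <stdint.h>
    #include <stdio.h>

    /* fack_count_of_last: fack count of the reordered skb's last segment;
     * pcount: how many segments the (GSO) skb covers;
     * prior_fackets: packets already counted ahead of it. */
    static uint32_t reorder_degree(uint32_t fack_count_of_last, uint32_t pcount,
                                   uint32_t prior_fackets)
    {
            uint32_t fack_count_of_first = fack_count_of_last - (pcount - 1);

            return prior_fackets - fack_count_of_first;
    }

    int main(void)
    {
            /* 4-segment skb ending at fack count 10, with 25 packets ahead of it:
             * old metric (measured from the last segment) 15, corrected metric 18. */
            printf("old=%u new=%u\n", 25u - 10u, reorder_degree(10, 4, 25));
            return 0;
    }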
-
- 01 November 2007, 2 commits
-
-
Submitted by Ilpo Järvinen
Similar to commit 3eec0047, the point of this is to avoid skipping R-bit skbs.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
A DSACK inside another SACK block was missed if the start_seq of the DSACK was larger than the SACK block's, because sorting prioritizes full processing of the SACK block before the DSACK. After SACK block sorting the situation is like this:

    SSSSSSSSS  D  SSSSSS  SSSSSSS

Because the write_queue is walked in order, when the first SACK block has been processed TCP is already past the skb for which the DSACK arrived, and we haven't taught it to backtrack (nor should we), so TCP just continues processing by going to the next SACK block after the DSACK (if any).

Whenever such a DSACK is present, do an embedded check during the previous SACK block. If the DSACK is below snd_una, there won't be an overlapping SACK block, and thus no problem in that case. Also, if the start_seq of the DSACK is equal to the actual block's, it will be processed first.

Tested this by using netem to duplicate 15% of packets, and by printing the SACK block when found_dup_sack is true and the selected skb in the dup_sack = 1 branch (if taken):

    SACK block 0: 4344-5792 (relative to snd_una 2019137317)
    SACK block 1: 4344-5792 (relative to snd_una 2019137317)

...equal start seqnos => next_dup = 0, dup_sack = 1 won't occur...

    SACK block 0: 5792-7240 (relative to snd_una 2019214061)
    SACK block 1: 2896-7240 (relative to snd_una 2019214061)
    DSACK skb match 5792-7240 (relative to snd_una)

...and the next_dup = 1 case (after the not-shown start_seq sort) went to the dup_sack = 1 branch.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
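A simplified sketch of the embedded check described above (assumed shape and names, not the actual kernel code): while walking skbs covered by a normal SACK block, each skb is also tested against a pending DSACK range that the sort placed behind this block.

    #include <stdbool.h>
    #include <stdint.h>

    /* Wrap-safe sequence comparisons in the usual TCP style. */
    static inline bool seq_before(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }
    static inline bool seq_after(uint32_t a, uint32_t b)  { return (int32_t)(b - a) < 0; }

    struct block { uint32_t start, end; };

    /* True if the skb range [skb_start, skb_end) lies inside the pending DSACK,
     * so it can be handled while walking the enclosing SACK block instead of
     * being skipped once the walk has moved past it. */
    static bool in_pending_dsack(uint32_t skb_start, uint32_t skb_end,
                                 const struct block *dsack, bool have_dsack)
    {
            if (!have_dsack)
                    return false;
            return !seq_before(skb_start, dsack->start) &&
                   !seq_after(skb_end, dsack->end);
    }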
-
- 26 October 2007, 3 commits
-
-
Submitted by Ryousei Takano
In the current net-2.6 kernel, handling of FLAG_DSACKING_ACK is broken. The flag is overwritten with 1 just after FLAG_DSACKING_ACK is set:

    if (found_dup_sack)
            flag |= FLAG_DSACKING_ACK;
    :
    flag = 1;

To fix it, this patch introduces a part of the tcp_sacktag_state patch:
http://marc.info/?l=linux-netdev&m=119210560431519&w=2

Signed-off-by: Ryousei Takano <takano-ryousei@aist.go.jp>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
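A minimal illustration of the general fix pattern (a sketch with assumed bit values, not the actual tcp_sacktag_state patch): accumulate status bits into the flag word with |= so that an earlier DSACK indication can never be plain-assigned away.

    #define FLAG_DSACKING_ACK 0x800  /* bit value assumed for illustration */
    #define FLAG_SOME_LATER   0x001  /* hypothetical bit set later in processing */

    static int build_ack_flag(int found_dup_sack, int later_condition)
    {
            int flag = 0;

            if (found_dup_sack)
                    flag |= FLAG_DSACKING_ACK;
            /* ... more per-ACK processing ... */
            if (later_condition)
                    flag |= FLAG_SOME_LATER;  /* accumulate; never "flag = ..." here */
            return flag;
    }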
-
Submitted by Adrian Bunk
tcp_match_skb_to_sack() can become static.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ryousei Takano
Fix inconsistent use of terms: 1) D-SACK, 2) F-RTO.
Signed-off-by: Ryousei Takano <takano-ryousei@aist.go.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 October 2007, 1 commit
-
-
Submitted by Chuck Lever
In some places, the result of skb_headroom() is compared to an unsigned integer, and in others, the result is compared to a signed integer. Make the comparisons consistent and correct.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
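As a reminder of why this matters (a generic C illustration, not the patched kernel code): mixing signed and unsigned operands silently promotes the signed side to unsigned, so a "not enough headroom" test written as a subtraction can never fire.

    #include <stdio.h>

    int main(void)
    {
            unsigned int headroom = 16;  /* e.g. what skb_headroom() returns */
            int needed = 64;

            /* Broken: the subtraction is performed as unsigned, wraps to a huge
             * positive value, and the test is never true. */
            if (headroom - needed < 0)
                    printf("never printed\n");

            /* Consistent and correct: compare like-signed quantities directly. */
            if (headroom < (unsigned int)needed)
                    printf("need %d more bytes of headroom\n",
                           needed - (int)headroom);
            return 0;
    }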
-
- 18 October 2007, 1 commit
-
-
Submitted by Ilpo Järvinen
Both the high-SACK detection and the new lowest-seq variables have an unnecessary zero special case, which is now removed by setting safe initial seqnos. This also fixes a problem in which a zero received_upto was passed to tcp_mark_lost_retrans, which confused the after() relations within the marker loop, causing incorrect TCPCB_SACKED_RETRANS clearing. The problem was noticed because of a performance report from TAKANO Ryousei <takano@axe-inc.co.jp>.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Acked-by: Ryousei Takano <takano-ryousei@aist.go.jp>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 October 2007, 1 commit
-
-
Submitted by Ilpo Järvinen
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 October 2007, 7 commits
-
-
Submitted by Ilpo Järvinen
This addition of lost_retrans_low to tcp_sock might be unnecessary; it's not clear how often the lost_retrans worker is executed when there wasn't work to do.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Detection implemented with lost_retrans must also work when the fastpath is taken, yet most of the queue is skipped, including (very likely) those retransmitted skbs we're interested in. This problem appeared when the hints got added, which removed the need to always walk over the whole write queue head. Therefore the decision for lost_retrans worker-loop entry must be separated from the sacktag processing more than was necessary before.

It turns out to be problematic to optimize the worker loop very heavily, because the ack_seqs of skbs may have a number of discontinuity points. Maybe an approach similar to the currently implemented one could be attempted, but that's becoming more and more complex because the trend is towards less skb walking in the sacktag marker. Trying a simple work-until-all-rexmitted-skbs-have-been-processed approach. Maybe the after(highest_sack_end_seq, tp->high_seq) check is not sufficiently accurate and causes entry too often in no-work-to-do cases. Since that's not known, I've separated the solution to that from this patch.

Noticed because of a report against a related problem from TAKANO Ryousei <takano@axe-inc.co.jp>. He also provided a patch to that part of the problem. This patch includes the solution to it (though this patch has to use somewhat different placement). TAKANO's description and patch are available here:
http://marc.info/?l=linux-netdev&m=119149311913288&w=2

...In short, TAKANO's problem is that the end_seq the loop is using is not necessarily the largest SACK block's end_seq, because the current ACK may still have higher SACK blocks which are processed later by the loop.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Both sacked_out and fackets_out are directly known from the how parameter. Since fackets_out is accurate, there's no need for recounting (sacked_out was previously unnecessarily counted in the loop anyway).
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
This is necessary for an upcoming DSACK bugfix. It reduces sacktag's length, which is not a sad thing at all... :-) Notice that out-of-memory now has to be handled at the caller's site.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
This paves the way for future cutting of that function.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
This condition (a plain R bit) can arise at least in a recovery that is triggered after tcp_undo_loss. There isn't any reason why such skbs should not be marked as lost; not marking them makes the in_flight estimator return too large values.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
I was reading tcp_enter_loss while looking for Cedric's bug and noticed that the bytes_acked adjustment is missing from the FRTO side. Since bytes_acked will only be used in tcp_cong_avoid, I think it's safe to assume the RTO would be spurious. During FRTO, cwnd will not be controlled by tcp_cong_avoid, and if FRTO calls for conventional recovery, cwnd is adjusted and the result of the wrong assumption is cleared from bytes_acked. If the RTO was in fact spurious, we did normal ABC already and can continue without any additional adjustments.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 October 2007, 9 commits
-
-
Submitted by Ilpo Järvinen
This follows the own-function-for-each-task principle; this really is a somewhat separate task being done in sacktag. It also reduces indentation. In addition, an ack_seq local variable was added to break some long lines, and some coding-style issues were fixed.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Stephen Hemminger
Fix a bunch of sparse warnings, mostly about 0 used as a NULL pointer and about shadowed variable declarations. One notable case was that the hash size should have been unsigned.
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
This should no longer be necessary because fackets_out is accurate. It would indicate a bug elsewhere, thus report it.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
In case somebody has a suggestion about a better place for this check, which must guarantee execution "early enough" (i.e., before the wrap can occur), I'm very open to it.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Just came across what RFC 2018 states about generation of valid SACK blocks in case of reneging. Alter the comment a bit to point this out clearly. IMHO, there isn't any reason to change the code, because the validation is there for a purpose (counters will inform the user about the decision TCP made if this case ever surfaces).
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ilpo Järvinen
In case of ACK reordering, a SACK block might have been valid at the time it was generated but is already obsolete, since we've received another kind of confirmation of the segments' arrival through snd_una advancement by an earlier packet. I didn't bother to build a way of distinguishing valid from invalid SACK blocks, but simply made reordered SACK blocks that are too old never be counted, regardless of their "real" validity, which could be determined by using the ack field of the reordered packet (won't be significant IMHO). DSACKs can very well be considered useful even in this situation, so won't do any of this for them.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
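A minimal wrap-safe sketch of the discard rule implied above (illustrative helper names, not the kernel code): a non-DSACK block whose end does not reach beyond the current snd_una carries no new information and is skipped.

    #include <stdbool.h>
    #include <stdint.h>

    /* Wrap-safe "a is after b" for 32-bit TCP sequence numbers. */
    static inline bool seq_after(uint32_t a, uint32_t b)
    {
            return (int32_t)(b - a) < 0;
    }

    /* A plain SACK block already covered by cumulative ACK progress adds no
     * information and is skipped; DSACKs stay useful even below snd_una. */
    static bool sack_block_is_obsolete(uint32_t end_seq, uint32_t snd_una,
                                       bool is_dsack)
    {
            return !is_dsack && !seq_after(end_seq, snd_una);
    }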
-
Submitted by Ilpo Järvinen
There's no reason to clear the sacktag skb hint when a small part of the rexmit queue changes. Account for the changes (if any) instead when fragmenting/collapsing. RTO/FRTO do not touch the SACKED_ACKED bits, so there is no need to discard the SACK tag hint at all.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
-