- 08 Oct 2008, 1 commit
By Ilpo Järvinen:
It all started from me noticing that this urgent check in tcp_clean_rtx_queue is unnecessarily inside the loop. Then I took a longer look at it and found out that the users of urg_mode can trivially do without it (well, almost: there was one gotcha). Bonus: those funny people who use URG with write_seq - snd_una >= 2^31 can now rejoice too (that is the only reason the between() is there; otherwise a simple compare would have done the job). Not that I assume the rest of the TCP code happily lives with such mind-boggling numbers :-). Alas, it turned out to be impossible to set wmem to such numbers anyway; yes, I really tried a big sendfile after setting a large wmem, but nothing happened :-). ...tcp_wmem is int, and so is sk_sndbuf... So, as an experiment, I hacked the variable to long, and it did seem to work... :-)

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
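For context, the between() the message refers to is the kernel's wraparound-safe range check; a plain compare is only equivalent while the distances involved stay below 2^31. A self-contained rendering, mirroring include/net/tcp.h:

```c
#include <stdbool.h>
#include <stdint.h>

/* As in include/net/tcp.h: true iff seq2 <= seq1 <= seq3 under
 * modular 32-bit sequence arithmetic, i.e. safe across wraparound. */
static bool between(uint32_t seq1, uint32_t seq2, uint32_t seq3)
{
	return seq3 - seq2 >= seq1 - seq2;
}
```

With all distances below 2^31 this degenerates to a simple ordered compare, which is the simplification the message alludes to.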
- 21 Sep 2008, 2 commits
By Ilpo Järvinen:
Both loops are quite similar, so they can be combined with little effort. As a result, forward_skb_hint becomes obsolete as well.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
The main benefit of this is that we can then freely point retransmit_skb_hint anywhere we want, because there is no longer a need to know what count changes would be involved. Since it is now really used only as a terminator, the unnecessary work is at most a one-time walk, and if some retransmissions are necessary after that point later on, the walk is not a complete waste of time anyway. Since retransmit_high must be kept valid, all lost markers must ensure that. I have now also learned how those "holes" in the rexmittable skbs can appear: MTU probing creates them. So I removed the misleading comment as well.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
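A hedged sketch of the terminator idea, not the verbatim patch (the helpers named are that era's write-queue walkers from include/net/tcp.h):

```c
/* Walk from the hint; stop once past retransmit_high, which the
 * LOST-marking code keeps valid so no rexmittable skb lies beyond it. */
skb = tp->retransmit_skb_hint ? tp->retransmit_skb_hint
			      : tcp_write_queue_head(sk);
tcp_for_write_queue_from(skb, sk) {
	if (skb == tcp_send_head(sk))
		break;
	if (!before(TCP_SKB_CB(skb)->seq, tp->retransmit_high))
		break;	/* terminator: nothing rexmittable beyond here */
	if (TCP_SKB_CB(skb)->sacked & TCPCB_LOST)
		tcp_retransmit_skb(sk, skb);
}
```

Because retransmit_high is only a terminator, pointing the hint too low merely costs a one-time walk, which is the flexibility the message describes.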
- 19 Jul 2008, 1 commit
By Adam Langley:
Remove redundant checks when setting eff_sacks and make the number of SACK blocks a compile-time constant. Now that the options code knows how many SACK blocks can fit in the header, we don't need the SACK code guessing at it.

Signed-off-by: Adam Langley <agl@imperialviolet.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
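Roughly the arithmetic this lets the options code own; the constants match the kernel headers, while the function itself is only an illustrative sketch:

```c
#define TCPOLEN_SACK_BASE	2	/* kind + length bytes */
#define TCPOLEN_SACK_PERBLOCK	8	/* two 32-bit sequence numbers */
#define TCP_NUM_SACKS		4	/* the new compile-time maximum */

/* How many SACK blocks fit into the remaining TCP option space. */
static unsigned int sack_blocks_that_fit(unsigned int option_space)
{
	unsigned int n = (option_space - TCPOLEN_SACK_BASE) /
			 TCPOLEN_SACK_PERBLOCK;

	return n < TCP_NUM_SACKS ? n : TCP_NUM_SACKS;
}
```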
- 13 Jun 2008, 1 commit
By David S. Miller:
This reverts two changesets, ec3c0982 ("[TCP]: TCP_DEFER_ACCEPT updates - process as established") and the follow-on bug fix 9ae27e0a ("tcp: Fix slab corruption with ipv6 and tcp6fuzz"). This change causes several problems. The first, reported by Ingo Molnar, is a distcc-over-loopback regression where connections were getting stuck.

Ilpo Järvinen first spotted the locking problems. The new function added by this code, tcp_defer_accept_check(), has only the child socket locked, yet it modifies state of the parent listening socket. Fixing that is non-trivial at best, because we can't simply grab the parent listening socket lock at this point; that would create an ABBA deadlock. The normal lock ordering is parent listening socket --> child socket, but this code path would require the reverse ordering.

Next is a problem noticed by Vitaliy Gusev, who noted:

----------------------------------------
>--- a/net/ipv4/tcp_timer.c
>+++ b/net/ipv4/tcp_timer.c
>@@ -481,6 +481,11 @@ static void tcp_keepalive_timer (unsigned long data)
>		goto death;
>	}
>
>+	if (tp->defer_tcp_accept.request && sk->sk_state == TCP_ESTABLISHED) {
>+		tcp_send_active_reset(sk, GFP_ATOMIC);
>+		goto death;

Here socket sk is not attached to the listening socket's request queue. tcp_done() will not call inet_csk_destroy_sock() (and tcp_v4_destroy_sock(), which should release this sk), as the socket is not DEAD. Therefore socket sk will be lost, never freed.
----------------------------------------

Finally, Alexey Kuznetsov argues that there might not even be any real value or advantage to these new semantics even if we fix all of the bugs:

----------------------------------------
Hiding from accept() sockets with only out-of-order data is the only thing which is impossible with the old approach. Is this really so valuable? My opinion: no, this is nothing but a new loophole to consume memory without control.
----------------------------------------

So revert this thing for now.

Signed-off-by: David S. Miller <davem@davemloft.net>
- 29 May 2008, 1 commit
By Ilpo Järvinen:
I tried to group the recovery-related fields (non-CA_Open-related variables, to be more accurate) nearby, so that one to three cache lines would not be necessary in CA_Open. These are now contiguously deployed:

```
struct sk_buff_head     out_of_order_queue;   /* 1968    80 */
/* --- cacheline 32 boundary (2048 bytes) --- */
struct tcp_sack_block   duplicate_sack[1];    /* 2048     8 */
struct tcp_sack_block   selective_acks[4];    /* 2056    32 */
struct tcp_sack_block   recv_sack_cache[4];   /* 2088    32 */
/* --- cacheline 33 boundary (2112 bytes) was 8 bytes ago --- */
struct sk_buff *        highest_sack;         /* 2120     8 */
int                     lost_cnt_hint;        /* 2128     4 */
int                     retransmit_cnt_hint;  /* 2132     4 */
u32                     lost_retrans_low;     /* 2136     4 */
u8                      reordering;           /* 2140     1 */
u8                      keepalive_probes;     /* 2141     1 */
/* XXX 2 bytes hole, try to pack */
u32                     prior_ssthresh;       /* 2144     4 */
u32                     high_seq;             /* 2148     4 */
u32                     retrans_stamp;        /* 2152     4 */
u32                     undo_marker;          /* 2156     4 */
int                     undo_retrans;         /* 2160     4 */
u32                     total_retrans;        /* 2164     4 */
```

...and they are then followed by the URG slow-path and keepalive-related variables. The head of out_of_order_queue is always needed for empty checks; if that queue is empty (and TCP is in CA_Open), the following ~200 bytes (on 64-bit) shouldn't be necessary for anything. If only the OFO queue exists but TCP is in CA_Open, selective_acks (and possibly duplicate_sack) are necessary besides out_of_order_queue, but the rest of the block again shouldn't be (i.e., the other direction had losses).

As the cache-line boundaries depend on many factors in the preceding fields, trying to align with them in mind doesn't make much sense. Commented one ordering hazard. There are a number of low-utilization u8/u16s that could be combined to save 2 bytes in total, so that the hole could be made to vanish (candidates include at least ecn_flags, urg_data, urg_mode, frto_counter, nonagle).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Acked-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 22 May 2008, 1 commit
By Ilpo Järvinen:
If the previous window was above the representable values of u16, strange things will happen if an undo with the truncated value is called for. Alternatively, this could be fixed with some max() trickery, but that would limit undos of large, high-speed windows. This adds a 16-bit hole, but there isn't anything to fill it with.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
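To see the failure mode, here is a tiny standalone demonstration of what a u16 field would hand back on undo (the values are illustrative, not from the patch):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t prior_ssthresh = 100000;	/* plausible on a high-BDP path */
	uint16_t truncated = (uint16_t)prior_ssthresh;	/* the old u16 field */

	/* prints: stored 100000, undo would restore 34464 */
	printf("stored %u, undo would restore %u\n",
	       prior_ssthresh, truncated);
	return 0;
}
```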
- 22 Mar 2008, 1 commit
By Patrick McManus:
Change the TCP_DEFER_ACCEPT implementation so that it transitions a connection to ESTABLISHED after the handshake is complete, instead of leaving it in SYN-RECV until some data arrives. The connection is placed in the accept queue when the first data packet arrives, from the slow path. Benefits:

- an established connection is now reset if it never makes it to the accept queue
- the diagnostic state of ESTABLISHED matches packet traces showing a completed handshake
- TCP_DEFER_ACCEPT timeouts are expressed in seconds and can now be enforced with reasonable accuracy, instead of rounding up to the next exponential back-off of the SYN-ACK retry

Signed-off-by: Patrick McManus <mcmanus@ducksong.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
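For reference, this is the application-side knob being reworked; a minimal sketch, assuming listen_fd is a bound, listening TCP socket created elsewhere:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Hand the connection to accept() only once data has arrived,
 * waiting at most ~5 seconds for that first segment. */
static int defer_accept(int listen_fd)
{
	int secs = 5;

	return setsockopt(listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
			  &secs, sizeof(secs));
}
```

Per the message above, with this patch the value is honored in seconds with reasonable accuracy rather than being rounded up to a SYN-ACK retransmit back-off.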
- 29 Jan 2008, 3 commits
By Ilpo Järvinen:
Key points of this patch:

- In case the new SACK information is advance-only, no skb processing below the previously discovered highest point is done.
- Cases below the highest point are optimized too, since there is no need to always go up to the highest point (which is very likely still present in that SACK). This is not entirely true, though, because I'm dropping fastpath_skb_hint, which could previously optimize those cases even better. Whether that's significant, I'm not too sure. Currently skipping is done by walking; combined with an RB-tree, all skipping would become fast regardless of window size (that can be done incrementally later).

Previously, a number of cases in TCP SACK processing failed to take advantage of the costly stored information in sack_recv_cache, most importantly for expected events such as cumulative ACKs and new hole ACKs. Processing such ACKs resulted in rather long walks, building up latency (which easily gets nasty when the window is huge). Those latencies are often completely unnecessary compared with the amount of _new_ information received; usually, for a cumulative ACK there is no new information at all, yet TCP walks the whole queue unnecessarily, potentially taking a number of costly cache misses on the way, etc.!

Since the inclusion of highest_sack, there is a lot of information that is very likely redundant (the SACK fastpath hint stuff, fackets_out, highest_sack), though there is no ultimate guarantee that they will remain the same the whole time (in all unearthly scenarios). Take advantage of this knowledge here: drop the fastpath hint and use direct access to the highest SACKed skb as a replacement. The effectively "special-cased" fastpath is dropped. This change adds some complexity in order to introduce a better-covered "fastpath", though the added complexity should make TCP behave more cache-friendly.

The current ACK's SACK blocks are compared against each cached block individually, and only ranges that are new are then scanned by the high-constant walk. For other parts of the write queue, even within previously known parts of the SACK blocks, a faster skip function is used (if necessary at all). In addition, whenever possible, TCP fast-forwards to the highest_sack skb that was made available by an earlier patch. In the typical case, nothing but this fast-forward and the mandatory markings after it occur, making the access pattern quite similar to the former fastpath "special case".

DSACKs are a special case that must always be walked. The local copying to recv_sack_cache could be more intelligent w.r.t. DSACKs, which are likely to be there only once, but that is left to a separate patch.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
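A minimal sketch of the cache-skip test described above (not the patch's exact code; kernel types are replaced with stdint equivalents so it compiles standalone):

```c
#include <stdbool.h>
#include <stdint.h>

struct tcp_sack_block { uint32_t start_seq, end_seq; };

/* Wraparound-safe "seq1 strictly precedes seq2". */
static bool before(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) < 0;
}

/* A block wholly inside a cached block from the previous ACK carries
 * no new information; only the parts outside the cache need the
 * expensive walk (DSACK blocks are always walked regardless). */
static bool fully_cached(const struct tcp_sack_block *sp,
			 const struct tcp_sack_block *cache)
{
	return !before(sp->start_seq, cache->start_seq) &&
	       !before(cache->end_seq, sp->end_seq);
}
```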
By Ilpo Järvinen:
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
It is going to replace the SACK fastpath hint quite soon... :-)

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 16 Oct 2007, 1 commit
By Ilpo Järvinen:
There is very little point in having a 32-bit snd_cwnd if snd_cwnd_cnt is not 32-bit as well, as a number of snd_cwnd incrementation formulas assume that snd_cwnd_cnt can grow at least as large as snd_cwnd. Whether 32-bit is useful was discussed when e0ef57cc was made:
http://marc.info/?l=linux-netdev&m=117218144409825&w=2

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
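The incrementation pattern in question is the classic Reno-style additive increase (cf. tcp_cong_avoid_ai in later kernels); a standalone sketch, with struct tcp_like standing in for the relevant tcp_sock fields:

```c
#include <stdint.h>

struct tcp_like {
	uint32_t snd_cwnd;	/* congestion window, in segments */
	uint32_t snd_cwnd_cnt;	/* ACK counter; must be able to reach snd_cwnd */
};

/* One extra segment per full window's worth of ACKed segments.
 * With a 16-bit counter this breaks once snd_cwnd exceeds 65535:
 * the counter can never reach snd_cwnd and growth stalls. */
static void cong_avoid_ai(struct tcp_like *tp)
{
	if (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
		tp->snd_cwnd_cnt = 0;
		tp->snd_cwnd++;
	} else {
		tp->snd_cwnd_cnt++;
	}
}
```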
- 12 Oct 2007, 1 commit
By Ilpo Järvinen:
This addition of lost_retrans_low to tcp_sock might be unnecessary; it's not clear how often the lost_retrans worker is executed when there was no work to do.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 11 Oct 2007, 5 commits
By Ilpo Järvinen:
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
This stale info came from the original idea, which proved to be unnecessarily complex; checking sacked_out > 0 is easy to do, and can be done when it's going to be needed anyway (it _can_ be valid also when sacked_out == 0, but there is not going to be a guarantee about that for now).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
It is easily calculable when needed, and its users are not that many after all.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
In addition, added a reference about the purpose of the loop.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Ilpo Järvinen:
It is guaranteed to be valid only when !tp->sacked_out. In most cases this seqno is available in the last ACK, but there is no guarantee for that. The new fast-recovery loss-marking algorithm needs this as an entry point.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 26 Apr 2007, 5 commits
By Arnaldo Carvalho de Melo:
For the places where we need a pointer to the transport header, it is still legal to touch skb->h.raw directly if just adding to, subtracting from, or setting it to another layer's header.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Arnaldo Carvalho de Melo:
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Arnaldo Carvalho de Melo:
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to avoid the longer, open-coded equivalent. Ditched a no-op in bnx2 in the process. I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()...

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
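The helpers read like this (modulo the era's skb->h.th spelling versus the later tcp_hdr() accessor): doff counts 32-bit words, so doff * 4 is the full header length and anything past doff == 5 (the 20-byte minimum) is options.

```c
static inline unsigned int tcp_hdrlen(const struct sk_buff *skb)
{
	return tcp_hdr(skb)->doff * 4;
}

static inline unsigned int tcp_optlen(const struct sk_buff *skb)
{
	return (tcp_hdr(skb)->doff - 5) * 4;
}
```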
By David S. Miller:
Signed-off-by: David S. Miller <davem@davemloft.net>
By Eric Dumazet:
I noticed in an oprofile study a cache miss in tcp_rcv_established() when reading copied_seq:

```
ffffffff80400a80 <tcp_rcv_established>: /* tcp_rcv_established total: 4034293 2.0400 */
 55493 0.0281 :ffffffff80400bc9: mov 0x4c8(%r12),%eax    <- copied_seq
543103 0.2746 :ffffffff80400bd1: cmp 0x3e0(%r12),%eax    <- rcv_nxt

if (tp->copied_seq == tp->rcv_nxt &&
    len - tcp_header_len <= tp->ucopy.len) {
```

In this function, the cache line 0x4c0 -> 0x500 is used only for reading this 'copied_seq' field. rcv_wup and copied_seq should be next to the rcv_nxt field, to lower the number of active cache lines in hot paths (tcp_rcv_established(), tcp_poll(), ...). As you suggested, I changed tcp_create_openreq_child() so that these fields are changed together, to avoid adding a new store-buffer stall. The patch is 64-bit friendly (no new hole because of alignment constraints).

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 09 Feb 2007, 1 commit
By Baruch Even:
Move the DSACK code outside the SACK fast-path checking code. If the DSACK check determined that the information was too old, we were left with a partially copied cache. Most likely this matters very little, since the next packet will not be a DSACK and we will find it in the cache; but it's still not good form, and there is little reason to couple the two checks. Since the SACK receive cache doesn't need the data to be in host order, we also remove the ntohl in the checking loop.

Signed-off-by: Baruch Even <baruch@ev-en.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 03 Dec 2006, 4 commits
By Arnaldo Carvalho de Melo:
As this struct is not userland-visible at all.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
By Arnaldo Carvalho de Melo:
With this we eliminate the last hole in struct tcp_sock. End result:

```
[acme@newtoy net-2.6.20]$ codiff -sV /tmp/tcp.o.before net/ipv4/tcp.o
/pub/scm/linux/kernel/git/acme/net-2.6.20/net/ipv4/tcp.c:
  struct tcp_sock |   -4
    tcp_header_len;
      from: int   /* 1000(0) 4(0) */
      to:   u16   /* 1000(0) 2(0) */
 1 struct changed
[acme@newtoy net-2.6.20]$
```

Now sizeof(tcp_sock) is just...

```
[acme@newtoy net-2.6.20]$ pahole --sizes ../OUTPUT/qemu/net-2.6.20/net/ipv4/tcp.o | grep -w tcp_sock
struct tcp_sock: 1500 0 1500 bytes
```

;-)

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
By Al Viro:
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
By YOSHIFUJI Hideaki:
Based on an implementation by Rick Payne.

Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 19 Oct 2006, 1 commit
By John Heffner:
This patch limits the amount of time you will defer sending a TSO segment to less than two clock ticks, or the time between two ACKs, whichever is longer. On slow links, deferring causes significant bursts. See the attached plots, which show RTT through a 1 Mbps link with a 100 ms RTT and a ~100 ms queue for (a) non-TSO, (b) current TSO, and (c) patched TSO. This burstiness causes significant jitter, tends to overflow queues early (bad for short queues), and makes delay-based congestion control more difficult. Deferring by a couple of clock ticks, I believe, will have a relatively small impact on performance.

Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
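A hedged reconstruction of the bound, not the patch verbatim (the field name tso_defer_start is hypothetical; the real patch may pack the timestamp differently):

```c
/* If we have been deferring this skb for about two ticks already,
 * stop deferring and push it out now. */
if (tp->tso_defer_start &&
    time_after(jiffies, tp->tso_defer_start + 2))
	goto send_now;

/* Otherwise remember when deferral began; 0 means "not deferring". */
if (!tp->tso_defer_start)
	tp->tso_defer_start = jiffies ?: 1;
```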
- 29 Sep 2006, 3 commits
By Al Viro:
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Al Viro:
Some of the instances of tcp_sack_block are host-endian, some are net-endian. Define struct tcp_sack_block_wire, identical to struct tcp_sack_block but with u32 replaced by __be32; annotate uses of tcp_sack_block, replacing the net-endian ones with tcp_sack_block_wire. The change is obviously safe since, for cc(1), __be32 is typedefed to u32.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
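The resulting pair in include/linux/tcp.h: identical layout, differing only in the endianness annotation that sparse can then check.

```c
/* On-the-wire copy: big-endian as received in the TCP option. */
struct tcp_sack_block_wire {
	__be32	start_seq;
	__be32	end_seq;
};

/* In-host copy: native byte order after conversion. */
struct tcp_sack_block {
	u32	start_seq;
	u32	end_seq;
};
```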
By Al Viro:
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 23 Jun 2006, 1 commit
By David Woodhouse:
The new <linux/dmaengine.h> header shouldn't be included from the !__KERNEL__ portion of tcp.h.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 18 Jun 2006, 1 commit
By Chris Leech:
Adds an async_wait_queue and some additional fields to tcp_sock, and a dma_cookie_t to sk_buff.

Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 26 Apr 2006, 1 commit
By David Woodhouse:
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
- 21 Mar 2006, 1 commit
By John Heffner:
This moves some TCP-specific MTU-probing state out of inet_connection_sock back to tcp_sock.

Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 04 Jan 2006, 3 commits
By Arnaldo Carvalho de Melo:
As DCCP needs to be called in the same spots. Now we have a member in inet_sock (is_icsk), set at sock creation time from struct inet_protosw->flags (if INET_PROTOSW_ICSK is set, as for TCP and DCCP), to tell whether a struct sock instance is an inet_connection_sock, for places like the ones in ip_sockglue.c (v4 and v6) where we previously checked whether sk_type was SOCK_STREAM. That check is insufficient because we now use the same code for DCCP, which has sk_type SOCK_DCCP.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Arnaldo Carvalho de Melo:
Upcoming patches will make, for instance, ip_sockglue.c need just this enum and not all of tcp.h.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
By Arnaldo Carvalho de Melo:
And move it to struct inet_connection_sock. DCCP will use it in the upcoming changesets.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- 11 Nov 2005, 1 commit
By Stephen Hemminger:
Use "hints" to speed up SACK processing. Various forms of this have been used by TCP developers (Web100, STCP, BIC) to avoid the 2x linear search of outstanding segments.

Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>