- 15 December 2012, 4 commits
-
-
Committed by Christoph Paasch

If either of the above functions, inet_csk_route_child_sock() or __inet_inherit_port(), fails, the newsk will not be freed:

unreferenced object 0xffff88022e8a92c0 (size 1592):
  comm "softirq", pid 0, jiffies 4294946244 (age 726.160s)
  hex dump (first 32 bytes):
    0a 01 01 01 0a 01 01 02 00 00 00 00 a7 cc 16 00  ................
    02 00 03 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffff8153d190>] kmemleak_alloc+0x21/0x3e
    [<ffffffff810ab3e7>] kmem_cache_alloc+0xb5/0xc5
    [<ffffffff8149b65b>] sk_prot_alloc.isra.53+0x2b/0xcd
    [<ffffffff8149b784>] sk_clone_lock+0x16/0x21e
    [<ffffffff814d711a>] inet_csk_clone_lock+0x10/0x7b
    [<ffffffff814ebbc3>] tcp_create_openreq_child+0x21/0x481
    [<ffffffff814e8fa5>] tcp_v4_syn_recv_sock+0x3a/0x23b
    [<ffffffff814ec5ba>] tcp_check_req+0x29f/0x416
    [<ffffffff814e8e10>] tcp_v4_do_rcv+0x161/0x2bc
    [<ffffffff814eb917>] tcp_v4_rcv+0x6c9/0x701
    [<ffffffff814cea9f>] ip_local_deliver_finish+0x70/0xc4
    [<ffffffff814cec20>] ip_local_deliver+0x4e/0x7f
    [<ffffffff814ce9f8>] ip_rcv_finish+0x1fc/0x233
    [<ffffffff814cee68>] ip_rcv+0x217/0x267
    [<ffffffff814a7bbe>] __netif_receive_skb+0x49e/0x553
    [<ffffffff814a7cc3>] netif_receive_skb+0x50/0x82

This happens because sk_clone_lock() initializes sk_refcnt to 2, so a single sock_put() is not enough to free the memory. Additionally, state such as xfrm, memcg, and cookie_values may already have been initialized and must be freed properly.

This is fixed by forcing a call to tcp_done(), which ends up in inet_csk_destroy_sock() and does the final sock_put(). tcp_done() is necessary because it performs all the cleanup of xfrm, memcg, cookie_values, etc. Before calling tcp_done(), the socket has to be marked SOCK_DEAD so that it actually enters inet_csk_destroy_sock(). To avoid the warning in inet_csk_destroy_sock(), inet_num has to be set to 0, and since inet_csk_destroy_sock() decrements orphan_count, it first has to be incremented. Calling tcp_done() allows us to remove the calls to tcp_clear_xmit_timer() and tcp_cleanup_congestion_control(). A similar approach is taken for DCCP by calling dccp_done().

The bug has been in the kernel since commit 093d2823 ("tproxy: fix hash locking issue when using port redirection in __inet_inherit_port()"), i.e. since version 2.6.37. Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
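A minimal sketch of the error-path cleanup described above (modelled on the fix as described; treat the helper and label names as illustrative rather than the exact upstream patch):

```c
#include <net/sock.h>
#include <net/inet_sock.h>
#include <net/tcp.h>

/* Release the extra reference left by sk_clone_lock(), mark the socket
 * dead and orphaned, and clear inet_num so inet_csk_destroy_sock()
 * runs without warnings.
 */
static void forced_close_prepare(struct sock *sk)
{
	/* sk_clone_lock() returned the socket locked with sk_refcnt == 2 */
	bh_unlock_sock(sk);
	sock_put(sk);

	sock_set_flag(sk, SOCK_DEAD);
	percpu_counter_inc(sk->sk_prot->orphan_count);
	inet_sk(sk)->inet_num = 0;
}

/* The failure path in tcp_v4_syn_recv_sock() then becomes roughly:
 *
 *	put_and_exit:
 *		forced_close_prepare(newsk);
 *		tcp_done(newsk);   // cleans up xfrm/memcg/cookie state, does the final sock_put()
 *		goto exit;
 */
```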
-
Committed by Duan Jiong

In ndisc_redirect_rcv(), skb->data points to the transport header, but icmpv6_notify() needs skb->data to point to the inner IP packet. So before using icmpv6_notify() to propagate the redirect, adjust skb->data to point to the inner IP packet that triggered the sending of the Redirect, and introduce struct rd_msg to make this easy. Signed-off-by: Duan Jiong <djduanjiong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
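The message layout the new struct describes is roughly the following (a sketch; the authoritative definition lives in include/net/ndisc.h):

```c
#include <linux/types.h>
#include <linux/icmpv6.h>
#include <linux/in6.h>

/* ICMPv6 Redirect (type 137) as seen on the wire: the ICMPv6 header,
 * the target (better first hop) and destination addresses, followed by
 * options such as the redirected header.
 */
struct rd_msg {
	struct icmp6hdr		icmph;
	struct in6_addr		target;
	struct in6_addr		dest;
	__u8			opt[0];
};
```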
-
Committed by Konstantin Khlebnikov

mutex_destroy() must be called before wpan_phy_free(), because the latter puts the last reference and frees the memory. Caught as overwritten poison in kmalloc-2048. Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org> Cc: Alexander Smirnov <alex.bluesman.smirnov@gmail.com> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: David S. Miller <davem@davemloft.net> Cc: linux-zigbee-devel@lists.sourceforge.net Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ang Way Chuang

As suggested by Stephen Hemminger, this removes the temporary variable introduced in commit eca2a43b ("bridge: fix icmpv6 endian bug and other sparse warnings"). Signed-off-by: Ang Way Chuang <wcang@sfc.wide.ad.jp> Acked-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 December 2012, 3 commits
-
-
Committed by stephen hemminger

Fix the warnings reported by sparse on the recent bridge multicast changes. Mostly just RCU annotation issues, but in this case sparse found a real bug: the ICMPv6 MLDv2 query mrc value is in network byte order. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
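The endian bug amounts to using the on-wire Maximum Response Code field directly; a hedged before/after sketch (header and field names as I recall them, so verify against include/net/mld.h):

```c
#include <net/mld.h>          /* struct mld2_query (assumed location) */
#include <asm/byteorder.h>

/* Buggy: treats the big-endian on-wire field as if it were host order */
static unsigned long max_resp_code_buggy(const struct mld2_query *mld2q)
{
	return mld2q->mld2q_mrc;
}

/* Fixed: convert from network byte order before using the value */
static unsigned long max_resp_code_fixed(const struct mld2_query *mld2q)
{
	return ntohs(mld2q->mld2q_mrc);
}
```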
-
Committed by YOSHIFUJI Hideaki / 吉藤英明

If a natural number n exists where 2 + data_len <= 8n < 2 + data_len + pad, the post padding is not initialized correctly. (Un)fortunately, the only type that requires pad is Infiniband, whose pad is 2 and data_len is 20; there 2 + data_len = 22 and 2 + data_len + pad = 24, and no multiple of 8 falls in that half-open range, so the logical error has not become visible. Still, it is better to fix it. Note that ndisc_opt_addr_space() handles the situation described above correctly. Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dave Jones

This is easily triggerable when fuzz-testing as an unprivileged user. We could rate-limit it, but given we don't print similar messages for other protocols, I just removed it. Signed-off-by: Dave Jones <davej@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 December 2012, 3 commits
-
-
Committed by Cong Wang

This patch implements adding/deleting mdb entries via netlink. Currently all entries are temporary; we probably need a flag to distinguish permanent entries too. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Graf <tgraf@suug.ch> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Cong Wang

As Stephen mentioned, we need to monitor the mdb changes in user-space, so add notifications via netlink too. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Graf <tgraf@suug.ch> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by YOSHIFUJI Hideaki

These symbols were exported for the bonding device by commit 305d552a ("bonding: send IPv6 neighbor advertisement on failover"). They became obsolete with commit 7c899432 ("bonding, ipv4, ipv6, vlan: Handle NETDEV_BONDING_FAILOVER like NETDEV_NOTIFY_PEERS") and were removed by commit 4f5762ec ("bonding: Remove obsolete source file 'bond_ipv6.c'"). Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 December 2012, 5 commits
-
-
Committed by Eric Dumazet

With BQL being deployed, the following behavior becomes more likely: we dequeue a packet from a qdisc in dequeue_skb(), then realize the target tx queue is in XOFF state in sch_direct_xmit(), and have to hold the skb in gso_skb for later. This shows up in stats (tc -s qdisc dev eth0) as requeues.

The problem with these requeues is that high-priority packets cannot be dequeued as long as this (possibly low-priority, big TSO) packet is not removed from gso_skb. At 1Gbps, a full-size TSO packet is 500 us of extra latency.

In some cases we know that all packets dequeued from a qdisc are for a particular, known txq:
- if the device is not multi-queue
- for all MQ/MQPRIO slave qdiscs

This patch introduces a new qdisc flag, TCQ_F_ONETXQUEUE, to mark this capability, so that dequeue_skb() is allowed to dequeue a packet only if the associated txq is not stopped. This indeed reduces latencies for high-priority packets (or improves fairness with sfq/fq_codel), and almost removes qdisc 'requeues'. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
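A hedged sketch of the dequeue-time check this flag enables (simplified from the scheduler core; not the exact upstream code):

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/sch_generic.h>

/* With TCQ_F_ONETXQUEUE set, every packet of this qdisc maps to the one
 * txq in q->dev_queue, so we can check that txq before dequeueing and
 * avoid the dequeue-then-requeue dance through gso_skb.
 */
static struct sk_buff *dequeue_skb_sketch(struct Qdisc *q)
{
	struct sk_buff *skb = q->gso_skb;
	const struct netdev_queue *txq = q->dev_queue;

	if (unlikely(skb)) {
		/* a previously held-back packet: release it only if its txq woke up */
		txq = netdev_get_tx_queue(qdisc_dev(q), skb_get_queue_mapping(skb));
		if (!netif_xmit_frozen_or_stopped(txq)) {
			q->gso_skb = NULL;
			q->q.qlen--;
		} else {
			skb = NULL;
		}
	} else if (!(q->flags & TCQ_F_ONETXQUEUE) ||
		   !netif_xmit_frozen_or_stopped(txq)) {
		skb = q->dequeue(q);
	}
	return skb;
}
```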
-
Committed by Eric Dumazet

__copy_skb_header(nskb, p) already copied p->cb[], no need to copy it again. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Cong Wang

In case of rehashing, introduce a global variable 'br_mdb_rehash_seq' which gets increased on every rehash, and assign net->dev_base_seq + br_mdb_rehash_seq to cb->seq. In theory cb->seq could wrap to zero, but this is not easy to fix, as net->dev_base_seq is not visible inside br_mdb_rehash(). In practice, this is rare. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Graf <tgraf@suug.ch> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Cong Wang <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Abhijit Pawar

This patch removes the redundant occurrences of simple_strto<foo>. Signed-off-by: Abhijit Pawar <abhi.c.pawar@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet

__napi_gro_receive() is inlined from two call sites for no good reason. Let's move the prep stuff into a function of its own, called only if/when needed. This saves 300 bytes on x86:

# size net/core/dev.o.after net/core/dev.o.before
   text    data     bss     dec     hex filename
  51968    1238    1040   54246    d3e6 net/core/dev.o.before
  51664    1238    1040   53942    d2b6 net/core/dev.o.after

Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 December 2012, 6 commits
-
-
Committed by Johannes Berg

Instead of updating stats before sending a packet, update them after processing the packet's status. This brings minstrel in line with minstrel_ht. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by Johannes Berg

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-
Committed by John Fastabend

Allow DCB and network namespaces to work together. This is useful if you have containers that are bound to 'phys' interfaces and want to also manage their DCB attributes. The net namespace is taken from sock_net(skb->sk) of the netlink skb. CC: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Abhijit Pawar

This patch replaces the obsolete simple_strto<foo> with kstrto<foo>. Signed-off-by: Abhijit Pawar <abhi.c.pawar@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
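The conversion pattern, roughly (the wrapper function and variable names are hypothetical, for illustration only):

```c
#include <linux/kernel.h>

/* Hypothetical example of the simple_strtoul() -> kstrtoul() conversion. */
static int parse_value(const char *buf, unsigned long *val)
{
	/* Before: silently accepts trailing junk and gives no error code
	 *	char *endp;
	 *	*val = simple_strtoul(buf, &endp, 10);
	 */

	/* After: kstrtoul() rejects malformed input and returns an errno */
	return kstrtoul(buf, 10, val);
}
```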
-
Committed by Johannes Berg

ip_check_defrag() might be called from af_packet within the RX path where shared SKBs are used, so it must not modify the input SKB before it has unshared it for defragmentation. Use skb_copy_bits() to get the IP header and only pull in everything later. The same is true for the other caller in macvlan as it is called from dev->rx_handler which can also get a shared SKB. Reported-by: Eric Leblond <eric@regit.org> Cc: stable@vger.kernel.org Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
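A hedged sketch of the pattern the fix relies on: read the IPv4 header with skb_copy_bits() instead of touching a possibly shared skb (function name and offset handling are illustrative):

```c
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <net/ip.h>

/* Decide whether the packet is an IPv4 fragment without modifying the
 * skb; cloning/pulling happens only after this check says "yes".
 */
static bool looks_like_fragment(const struct sk_buff *skb, int net_offset)
{
	struct iphdr iph;

	if (skb_copy_bits(skb, net_offset, &iph, sizeof(iph)) < 0)
		return false;
	if (iph.ihl < 5 || iph.version != 4)
		return false;
	/* fragments carry MF and/or a non-zero fragment offset */
	return (iph.frag_off & htons(IP_MF | IP_OFFSET)) != 0;
}
```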
-
Committed by Dan Carpenter

We pass IFLA_BRPORT_MAX to nla_parse_nested(), so we need IFLA_BRPORT_MAX + 1 elements. Also, Smatch complains that we read past the end of the array in br_set_port_flag() when it is called with IFLA_BRPORT_FAST_LEAVE. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
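The off-by-one in sketch form (the policy table name here is a stand-in for the bridge's real nla_policy):

```c
#include <linux/if_link.h>
#include <net/netlink.h>

/* Attribute types run from 0 to IFLA_BRPORT_MAX inclusive, so the parse
 * array needs IFLA_BRPORT_MAX + 1 slots; the old declaration
 * "struct nlattr *tb[IFLA_BRPORT_MAX]" was one element short.
 */
static int parse_brport_attrs(struct nlattr *attr,
			      const struct nla_policy *port_policy)
{
	struct nlattr *tb[IFLA_BRPORT_MAX + 1];

	return nla_parse_nested(tb, IFLA_BRPORT_MAX, attr, port_policy);
}
```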
-
- 10 December 2012, 4 commits
-
-
Committed by Neal Cardwell

Add logic to verify that a port comparison byte code operation actually has the second inet_diag_bc_op from which we read the port for such operations. Previously the code blindly referenced op[1] without first checking whether a second inet_diag_bc_op struct could fit there. So a malicious user could make the kernel read 4 bytes beyond the end of the bytecode array by claiming to have a whole port comparison byte code (2 inet_diag_bc_op structs) when in fact the bytecode was not long enough to hold both. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
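The added check, in sketch form (mirrors the description above; not necessarily the exact upstream helper):

```c
#include <linux/inet_diag.h>
#include <linux/types.h>

/* A port comparison consumes two inet_diag_bc_op structs: the operation
 * itself plus a second one holding the port. Verify the remaining
 * bytecode really contains both before op[1] is dereferenced.
 */
static bool valid_port_comparison(const struct inet_diag_bc_op *op,
				  int remaining_len, int *min_len)
{
	*min_len += sizeof(struct inet_diag_bc_op);
	return remaining_len >= *min_len;
}
```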
-
Committed by Neal Cardwell

Add logic to check the address family of the user-supplied conditional and the address family of the connection entry. We now do not do prefix matching of addresses from different address families (AF_INET vs AF_INET6), except for the previously existing support for having an IPv4 prefix match an IPv4-mapped IPv6 address (which this commit maintains as-is). This change is needed for two reasons: (1) The addresses are different lengths, so comparing a 128-bit IPv6 prefix match condition to a 32-bit IPv4 connection address can cause us to unwittingly walk off the end of the IPv4 address and read garbage or oops. (2) The IPv4 and IPv6 address spaces are semantically distinct, so a simple bit-wise comparison of the prefixes is not meaningful, and would lead to bogus results (except for the IPv4-mapped IPv6 case, which this commit maintains). Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Neal Cardwell

Add logic to validate INET_DIAG_BC_S_COND and INET_DIAG_BC_D_COND operations. Previously we did not validate the inet_diag_hostcond, address family, address length, and prefix length. So a malicious user could make the kernel read beyond the end of the bytecode array by claiming to have a whole inet_diag_hostcond when the bytecode was not long enough to contain a whole inet_diag_hostcond of the given address family. Or they could make the kernel read up to about 27 bytes beyond the end of a connection address by passing a prefix length that exceeded the length of addresses of the given family. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Neal Cardwell

Fix inet_diag to be aware of the fact that AF_INET6 TCP connections instantiated for IPv4 traffic and in the SYN-RECV state were actually created with inet_reqsk_alloc(), instead of inet6_reqsk_alloc(). This means that for such connections inet6_rsk(req) returns a pointer to a random spot in memory up to roughly 64KB beyond the end of the request_sock. With this bug, for a server using AF_INET6 TCP sockets and serving IPv4 traffic, an inet_diag user like `ss state SYN-RECV` would lead to inet_diag_fill_req() causing an oops or the export to user space of 16 bytes of kernel memory as a garbage IPv6 address, depending on where the garbage inet6_rsk(req) pointed. Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 December 2012, 4 commits
-
-
Committed by Ben Hutchings

Use the device model to get just the name, rather than using the ethtool API to get all driver information. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings

In cfusbl_device_notify(), the usbnet and usbdev variables are initialised before the driver name has been checked. In case the device's driver is not cdc_ncm, this may result in reading beyond the end of the netdev private area. Move the initialisation below the driver name check. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Alexander Duyck

This change allows VXLAN to enable Tx checksum offloading even on devices that do not support encapsulated checksum offloads. The advantage is that the lower device can change, for example due to routing table changes, without impacting the features of the VXLAN device itself. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Joseph Gasparakis

This patch adds kernel support for offloading Tx and Rx checksumming of encapsulated packets (such as VXLAN and IP GRE) to the NIC.

For Tx encapsulation offload, the driver needs to set the right bits in netdev->hw_enc_features. The protocol driver has to set the skb->encapsulation bit and populate the inner headers, so the NIC driver can use those inner headers to calculate the checksum in hardware.

For Rx encapsulation offload, the driver again needs to set the skb->encapsulation flag and set skb->ip_summed to CHECKSUM_UNNECESSARY. In that case the protocol driver should push the decapsulated packet up the stack, again with CHECKSUM_UNNECESSARY. In either case the protocol driver should set the skb->encapsulation flag back to zero. Finally, the protocol driver should have the NETIF_F_RXCSUM flag set in its features. Signed-off-by: Joseph Gasparakis <joseph.gasparakis@intel.com> Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
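The driver/protocol contract described above, sketched (flag names as described in the message; illustrative only, not a complete driver):

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* NIC driver, probe time: advertise checksum offload for encapsulated
 * packets via hw_enc_features.
 */
static void nic_advertise_enc_offload(struct net_device *netdev)
{
	netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
}

/* Tunnel/protocol driver, Tx path: mark the skb so the NIC uses the
 * already populated inner headers for the hardware checksum.
 */
static void tunnel_mark_tx_encap(struct sk_buff *skb)
{
	skb->encapsulation = 1;
}

/* NIC driver, Rx path: hardware validated the checksums of the
 * encapsulated packet.
 */
static void nic_rx_csum_verified(struct sk_buff *skb)
{
	skb->encapsulation = 1;
	skb->ip_summed = CHECKSUM_UNNECESSARY;
}
```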
-
- 08 December 2012, 11 commits
-
-
Committed by Paul Gortmaker

In TIPC's accept() routine, there is a large block of code relating to initialization of a new socket, all within an if condition checking if the allocation succeeded. Here, we simply flip the check of the if, so that the main execution path stays at the same indentation level, which improves readability. If the allocation fails, we jump to an already existing exit label. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
Committed by Ying Xue

TIPC's accept() grabs the socket lock on a newly allocated socket while holding the socket lock on an old socket. But lockdep worries that this might be a recursive lock attempt:

[ INFO: possible recursive locking detected ]
---------------------------------------------
kworker/u:0/6 is trying to acquire lock:
 (sk_lock-AF_TIPC){+.+.+.}, at: [<c8c1226c>] accept+0x15c/0x310 [tipc]
but task is already holding lock:
 (sk_lock-AF_TIPC){+.+.+.}, at: [<c8c12138>] accept+0x28/0x310 [tipc]
other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(sk_lock-AF_TIPC);
  lock(sk_lock-AF_TIPC);
 *** DEADLOCK ***
 May be due to missing lock nesting notation
[...]

Tell lockdep that this locking is safe by using lock_sock_nested(). This is similar to what was done in commit 5131a184 for SCTP code ("SCTP: lock_sock_nested in sctp_sock_migrate"). Also note that this isn't something that is seen normally, as it was uncovered with some experimental work-in-progress code not yet ready for mainline. So no need for stable backports or similar of this commit. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
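The essential change, sketched (assuming the usual socket-locking helpers):

```c
#include <linux/lockdep.h>
#include <net/sock.h>

/* Take the new (child) socket's lock while the listener's lock is held.
 * SINGLE_DEPTH_NESTING tells lockdep the two sk_locks belong to
 * different sockets, so this is nesting rather than real recursion.
 */
static void lock_child_socket(struct sock *new_sk)
{
	lock_sock_nested(new_sk, SINGLE_DEPTH_NESTING);
}
```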
-
Committed by Ying Xue

As connection setup is now completed asynchronously in BH context, in the function filter_connect(), the corresponding code in recv_msg() becomes redundant. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
Committed by Ying Xue

TIPC has so far only supported blocking connect(), meaning that a call to connect() doesn't return until either the connection is fully established or an error occurs. This has proved insufficient for many users, so we now introduce non-blocking connect(), analogous to how this is done in TCP and other protocols.

With this feature, if a connection cannot be established instantly, connect() will return the error code -EINPROGRESS. If the user later calls connect() again, it will return either -EALREADY or -EISCONN, depending on whether the connection has been established or not. The user must have explicitly set the socket to be non-blocking (SOCK_NONBLOCK or O_NONBLOCK, depending on the method used), so unless they had already set this for some reason (the socket would anyway remain blocking in current TIPC), this change should be completely backwards compatible. It is also now possible to call select() or poll() to wait for the completion of a connection.

An effect of the above is that the actual completion of a connection may now be performed asynchronously, independent of the calls from user space. Therefore, we now execute this code in BH context, in the function filter_rcv(), which is executed upon reception of messages in the socket. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> [PG: minor refactoring for improved connect/disconnect function names] Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
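From user space, the new behaviour follows the usual non-blocking connect pattern; a hedged example (address setup omitted, error handling minimal):

```c
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>

/* Non-blocking connect against a socket (e.g. AF_TIPC), following the
 * semantics described above: EINPROGRESS first, then poll(), then a
 * second connect() that reports success or EISCONN once established.
 */
static int connect_nonblocking(int sd, const struct sockaddr *addr,
			       socklen_t alen)
{
	struct pollfd pfd = { .fd = sd, .events = POLLOUT };

	fcntl(sd, F_SETFL, fcntl(sd, F_GETFL, 0) | O_NONBLOCK);

	if (connect(sd, addr, alen) == 0)
		return 0;                       /* connected immediately */
	if (errno != EINPROGRESS)
		return -1;                      /* real failure */

	if (poll(&pfd, 1, 5000 /* ms */) <= 0)  /* wait for completion */
		return -1;

	if (connect(sd, addr, alen) == 0 || errno == EISCONN)
		return 0;
	return -1;                              /* e.g. EALREADY, ECONNREFUSED */
}
```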
-
Committed by Ying Xue

Handling of connection-related message reception is currently scattered around at different places in the code. This makes it harder to verify that things are handled correctly in all possible scenarios. So we consolidate the existing processing of connection-oriented message reception in a single routine. In the process, we convert the chain of if/else into a switch/case for improved readability.

A cast on the socket_state in the switch is needed to avoid compile warnings on 32 bit, like "net/tipc/socket.c:1252:2: warning: case value '4294967295' not in enumerated type". This happens because existing TIPC code pseudo-extends the default Linux socket state values with:

#define SS_LISTENING -1 /* socket is listening */
#define SS_READY -2 /* socket is connectionless */

It may make sense to add these as _positive_ values to the existing socket state enum list someday, instead of these already existing defines. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> [PG: add cast to fix warning; remove returns from middle of switch] Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
Committed by Paul Gortmaker

Currently we have tipc_disconnect and tipc_disconnect_port. It is not clear from the names alone what they do or how they differ. It turns out that tipc_disconnect just deals with the port locking and then calls tipc_disconnect_port, which does all the work.

If we rename as follows:

tipc_disconnect_port --> __tipc_disconnect

then we will be following typical Linux convention, where:

__tipc_disconnect: "raw" function that does all the work.
tipc_disconnect: wrapper that deals with locking and then calls the real core __tipc_disconnect function.

With this, the difference is immediately evident, and locking violations are more apt to be spotted by chance while working on, or even just while reading, the code.

On the connect side of things, we currently only have the single "tipc_connect2port" function. It does both the locking at enter/exit and the core of the work. Pending changes will make it desirable to have connect be a two-part locking wrapper + worker function, just like the disconnect already is. Here, we make connect look just like the updated disconnect case, for the above reason and for consistency. In the process, we also get rid of the "2port" suffix that was on the original name, since it adds no descriptive value.

On close examination, one might notice that the above connect changes implicitly move the call to tipc_link_get_max_pkt() to be within the scope of the tipc_port_lock() protected region, when it was not previously. We don't see any issues with this, and it is in keeping with __tipc_connect doing the work and tipc_connect just handling the locking. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
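The wrapper/worker split follows a common kernel pattern; a self-contained sketch with a hypothetical port type (not TIPC's real structures):

```c
#include <linux/spinlock.h>

/* Hypothetical port, just to illustrate the naming convention. */
struct demo_port {
	spinlock_t lock;
	/* ... connection state ... */
};

/* __demo_disconnect: "raw" worker; caller must hold the port lock. */
static int __demo_disconnect(struct demo_port *p)
{
	/* ... tear down connection state ... */
	return 0;
}

/* demo_disconnect: locking wrapper around the worker. */
static int demo_disconnect(struct demo_port *p)
{
	int res;

	spin_lock_bh(&p->lock);
	res = __demo_disconnect(p);
	spin_unlock_bh(&p->lock);
	return res;
}
```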
-
Committed by Jon Maloy

The sk_recv_queue upper limit for connectionless sockets has empirically turned out to be too low. When we double the current limit, we get far fewer rejected messages and no noticeable negative side effects. Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
Committed by Eric Dumazet

Commit 2e71a6f8 ("net: gro: selective flush of packets") introduced a bug for skbs using frag_list. This part of the GRO stack is rarely used, as it needs skbs that do not use a page fragment for their skb->head. Most drivers do use a page fragment, but some of them use GFP_KERNEL allocations for the initial fill of their RX ring buffer. napi_gro_flush() overwrites skb->prev, which was used for these skbs to point to the last skb in the frag_list. Fix this by using a separate field in struct napi_gro_cb to point to the last fragment. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng

If the SYN-ACK partially acks the SYN-data, the client retransmits the remaining data by tcp_retransmit_skb(). This increments loss recovery state variables like tp->retrans_out in Open state. If loss recovery happens before the retransmission is acked, it triggers the WARN_ON check in tcp_fastretrans_alert(). For example: the client sends SYN-data, gets a SYN-ACK acking only the ISN, retransmits the data, sends another 4 data packets and gets 3 dupacks. Since the retransmission is not caused by a network drop, it should not update the recovery state variables. Further, the server may return a smaller MSS than the cached MSS used for the SYN-data, so the retransmission needs a loop; otherwise some data would not be retransmitted until timeout or other loss recovery events. Signed-off-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Cong Wang

V5: fix two bugs pointed out by Thomas; remove the seq check for now, mark it as TODO
V4: remove some useless #includes; some coding style fixes
V3: drop debugging printk's; update the selinux perm table as well
V2: drop patch 1/2, export ifindex directly; redesign the netlink attributes; improve the netlink seq check; handle IPv6 addresses as well

This patch exports the bridge multicast database via the netlink message type RTM_GETMDB. It is similar to fdb, but currently bridge-specific. We may need to support modifying the multicast database too (RTM_{ADD,DEL}MDB). (Thanks to Thomas for the patient reviews.) Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Stephen Hemminger <shemminger@vyatta.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Thomas Graf <tgraf@suug.ch> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Cong Wang <amwang@redhat.com> Acked-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
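The netlink payload for one MDB entry looks roughly like the following (a sketch from memory; verify against include/uapi/linux/if_bridge.h, and note that later patches extend it):

```c
#include <linux/in6.h>
#include <linux/types.h>

/* Per-entry payload carried in RTM_GETMDB dumps (sketch). */
struct br_mdb_entry_sketch {
	__u32	ifindex;			/* bridge port the group is on */
	struct {
		union {
			__be32		ip4;	/* IPv4 group */
			struct in6_addr	ip6;	/* IPv6 group */
		} u;
		__be16	proto;			/* ETH_P_IP or ETH_P_IPV6 */
	} addr;
};
```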
-
Committed by Ying Xue

As a complement to the per-socket sk_recv_queue limit, TIPC keeps a global atomic counter for the sum of sk_recv_queue sizes across all TIPC sockets. When incremented, the counter is compared to an upper threshold value, and if this is reached, the message is rejected with error code TIPC_OVERLOAD.

This check was originally meant to protect the node against buffer exhaustion and general CPU overload. However, all experience indicates that the feature is not only redundant on Linux, but even harmful. Users run into the limit very often, causing disturbances for their applications, while removing it seems to have no negative effects at all. We have also seen that overall performance is boosted significantly when this bottleneck is removed. Furthermore, we don't see any other network protocol maintaining such a mechanism, which strengthens our conviction that this control can be eliminated.

As a result, the atomic variable tipc_queue_size is now unused and can be deleted. There is a getsockopt call that used to allow reading it; we retain that but just return zero, for maximum compatibility. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Cc: Neil Horman <nhorman@tuxdriver.com> [PG: phase out tipc_queue_size as pointed out by Neil Horman] Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-