- 09 May 2012, 12 commits
-
-
Committed by Ashok Nagarajan
This patch introduces a new mesh configuration parameter, "ht_opmode", which allows users to check the currently selected HT protection mode. Users can configure the protection mode with the command "iw mesh_iface set mesh_param mesh_ht_protection_mode=2". The default protection mode for mesh is non-HT mixed mode. Signed-off-by: Ashok Nagarajan <ashok@cozybit.com> Reviewed-by: Thomas Pedersen <thomas@cozybit.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Ben Greear
This adds hooks to call into the driver to get additional stats for the ethtool API. Signed-off-by: Ben Greear <greearb@candelatech.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Ben Greear
Signed-off-by: Ben Greear <greearb@candelatech.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Pablo Neira Ayuso
This patch removes ip_queue support, which was marked as obsolete years ago. The nfnetlink_queue module provides a more advanced user-space packet queueing mechanism. This patch also removes capability code included in SELinux that refers to ip_queue; otherwise, we break compilation. Several warnings have been sent regarding this to the mailing list in the past month without anyone raising a hand to stop this with some strong argument. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Pablo Neira Ayuso
Explicit helper attachment via the CT target is broken with NAT if non-standard ports are used. This problem was hidden behind the automatic helper assignment routine, so it becomes more noticeable now that we can disable automatic helper assignment with Eric Leblond's: 9e8ac5a netfilter: nf_ct_helper: allow to disable automatic helper assignment. Basically, nf_conntrack_alter_reply looks the helper up if NAT is enabled. Unfortunately, we no longer have the conntrack template at that point. Since we don't want to rely on the automatic helper assignment, we can skip the second look-up and stick to the helper that was attached by iptables. With the CT target, the user is in full control of helper attachment; thus, the policy is to trust what the user explicitly configures via iptables (no automatic magic anymore). Interestingly, this bug was hidden by the automatic helper look-up code, but it can easily be triggered if you attach the helper on a non-standard port, e.g. iptables -I PREROUTING -t raw -p tcp --dport 8888 -j CT --helper ftp, and have disabled the automatic helper assignment. I added the IPS_HELPER_BIT, which allows us to differentiate between a helper that has been explicitly attached and one that has been automatically assigned. I didn't come up with a better solution (having backward compatibility in mind). Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
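A rough sketch of how such an explicit-attachment flag can short-circuit the automatic lookup (the surrounding function and its exact placement are assumptions, not the verbatim patch):

    /* Sketch: if the user attached a helper via the CT target, the
     * IPS_HELPER status bit is set and the automatic port-based helper
     * lookup is skipped entirely. */
    static int try_assign_helper(struct nf_conn *ct)
    {
            if (test_bit(IPS_HELPER_BIT, &ct->status))
                    return 0;       /* explicitly attached: trust the user */
            /* ... fall through to automatic port-based lookup ... */
            return 0;
    }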
-
Committed by Pablo Neira Ayuso
Allow master and backup servers to use many threads for sync traffic. Add the sysctl var "sync_ports" to define the number of threads. Every thread will use a single UDP port: thread 0 uses the default port 8848, while the last thread uses port 8848+sync_ports-1. The sync traffic for connections is scheduled to the many master threads based on the cp address, but one connection is always assigned to the same thread to avoid reordering of the sync messages. Remove ip_vs_sync_switch_mode because this check for sync mode change is still risky; instead, check for mode change under sync_buff_lock. Make sure the backup socks do not block on reading. Special thanks to Aleksey Chudov for helping in all tests. Signed-off-by: Julian Anastasov <ja@ssi.bg> Tested-by: Aleksey Chudov <aleksey.chudov@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au>
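The thread-selection rule described above boils down to a hash on the connection's client address; a minimal sketch (the helper name is illustrative):

    /* Sketch: all sync messages for one connection hash to the same
     * master thread, so they cannot be reordered between threads.
     * Thread i sends on UDP port 8848 + i. */
    static unsigned int sync_thread_id(const struct ip_vs_conn *cp,
                                       unsigned int sync_ports)
    {
            return ntohl(cp->caddr.ip) % sync_ports;
    }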
-
Committed by Julian Anastasov
Add two new sysctl vars to control the sync rate, the main idea being to reduce the rate for connection templates, because it currently depends on the packet rate for controlled connections. This mechanism should also be useful for normal connections with high traffic. sync_refresh_period: in seconds, the difference in the reported connection timer that triggers a new sync message. It can be used to avoid sync messages for the specified period (or half of the connection timeout, if it is lower) if the connection state has not changed since the last sync. sync_retries: integer, 0..3, defines sync retries with a period of sync_refresh_period/8. Useful to protect against loss of sync messages. Allow sysctl_sync_threshold to be used with sysctl_sync_period=0, so that only a single sync message is sent if sync_refresh_period is also 0. Add a new field, "sync_endtime", to the connection structure to hold the reported time when the connection expires; the 2 lowest bits represent the retry count. As sysctl_sync_period can now be 0, use ACCESS_ONCE to avoid division by zero. Special thanks to Aleksey Chudov for being patient with me, for his extensive reports and for helping in all tests. Signed-off-by: Julian Anastasov <ja@ssi.bg> Tested-by: Aleksey Chudov <aleksey.chudov@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au>
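Packing the retry count into the two lowest bits of sync_endtime could look roughly like this (a sketch, not the verbatim patch):

    /* Sketch: bits 1..0 of sync_endtime hold the retry count; the
     * remaining bits hold the jiffies value at which the connection
     * is reported to expire. */
    #define SYNC_RETRY_MASK 3UL

    static unsigned long pack_sync_endtime(unsigned long endtime,
                                           unsigned int retries)
    {
            return (endtime & ~SYNC_RETRY_MASK) | (retries & SYNC_RETRY_MASK);
    }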
-
Committed by Pablo Neira Ayuso
A high rate of sync messages in the master can lead to overflowing the socket buffer and dropping the messages. A fixed sleep of 1 second without wakeup events is not suitable for loaded masters, so use delayed_work to schedule sending of queued messages and limit the delay to IPVS_SYNC_SEND_DELAY (20ms). This will reduce the rate of wakeups, but to avoid sending long bursts we wake up the master thread after IPVS_SYNC_WAKEUP_RATE (8) messages. Add a hard limit on the messages queued before sending via the "sync_qlen_max" sysctl var. It defaults to 1/32 of the memory pages but actually represents a number of messages. It will protect us from allocating large parts of memory when the sending rate is lower than the queuing rate. As suggested by Pablo, add a new sysctl var, "sync_sock_size", to configure the SNDBUF (master) or RCVBUF (slave) socket limit. The default value is 0 (preserve system defaults). Change the master thread to detect and block on SNDBUF overflow, so that we do not drop messages when the socket limit is low but the sync_qlen_max limit is not reached. On ENOBUFS or other errors, just drop the messages. Change the master thread to enter the TASK_INTERRUPTIBLE state early, so that we do not miss wakeups due to messages or the kthread_should_stop event. Thanks to Pablo Neira Ayuso for his valuable feedback! Signed-off-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au>
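The pacing logic might be sketched as follows (constants from the text above; the queue fields and helper names are assumptions):

    #define IPVS_SYNC_SEND_DELAY    (HZ / 50)       /* 20 ms */
    #define IPVS_SYNC_WAKEUP_RATE   8

    /* Sketch: queue one sync message and pace the master thread. */
    static void sync_enqueue(struct netns_ipvs *ipvs, struct ip_vs_sync_buff *sb)
    {
            list_add_tail(&sb->list, &ipvs->sync_queue);
            if (++ipvs->sync_queue_len % IPVS_SYNC_WAKEUP_RATE == 0)
                    wake_up_process(ipvs->master_thread);   /* avoid long bursts */
            else
                    schedule_delayed_work(&ipvs->master_wakeup_work,
                                          IPVS_SYNC_SEND_DELAY);
    }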
-
Committed by Julian Anastasov
As the goal is to mirror the inactconns/activeconns counters in the backup server, make sure cp->flags are updated even if cp is not yet bound to dest. If cp->flags are not updated, ip_vs_bind_dest will rely only on the initial flags when updating the counters. To avoid mistakes and complicated checks for protocol state, rely only on the IP_VS_CONN_F_INACTIVE bit when updating the counters. Signed-off-by: Julian Anastasov <ja@ssi.bg> Tested-by: Aleksey Chudov <aleksey.chudov@gmail.com> Signed-off-by: Simon Horman <horms@verge.net.au>
-
Committed by Eric Dumazet
this_cpu_inc() is IRQ safe and faster than local_bh_disable()/__this_cpu_inc()/local_bh_enable(), at least on x86. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Patrick McHardy <kaber@trash.net> Cc: Christoph Lameter <cl@linux.com> Cc: Tejun Heo <tj@kernel.org> Reviewed-by: Christoph Lameter <cl@linux.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
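The transformation is essentially the following (the counter name is illustrative):

    /* Before: three steps, and only safe against softirqs. */
    local_bh_disable();
    __this_cpu_inc(stats->searched);
    local_bh_enable();

    /* After: one step, IRQ safe, a single instruction on x86. */
    this_cpu_inc(stats->searched);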
-
Committed by Eric Leblond
This patch allows you to disable automatic conntrack helper lookup based on TCP/UDP ports, e.g. echo 0 > /proc/sys/net/netfilter/nf_conntrack_helper [ Note: flows that already got a helper will keep using it even if automatic helper assignment has been disabled ] Once this behaviour has been disabled, you have to explicitly use the iptables CT target to attach helpers to flows. There are good reasons to stop supporting automatic helper assignment; for further information, please read: http://www.netfilter.org/news.html#2012-04-03 This patch also adds a message to inform that automatic helper assignment is deprecated and will be removed soon (it is printed only once, for the first flow that gets a helper attached, to keep it as unobtrusive as possible). Signed-off-by: Eric Leblond <eric@regit.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
-
Committed by Joe Perches
Make the return value explicitly true or false. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 May 2012, 3 commits
-
-
Committed by David Daney
This patch adds a somewhat generic framework for MDIO bus multiplexers. It is modeled on the I2C multiplexer. The multiplexer is needed if there are multiple PHYs with the same address connected to the same MDIO bus adapter, or if there is insufficient electrical drive capability for all the connected PHY devices. Conceptually it could look something like this:

                    ------------------
                    | Control Signal |
                    --------+---------
                            |
   ---------------  --------+------
   | MDIO MASTER |--| Multiplexer |
   ---------------  --+-------+----
                      |       |
                      C       C
                      h       h
                      i       i
                      l       l
                      d       d
                      |       |
          ---------   A       B   ---------
          |       |   |       |   |       |
          | PHY@1 +---+       +---+ PHY@1 |
          |       |   |       |   |       |
          ---------   |       |   ---------
          ---------   |       |   ---------
          |       |   |       |   |       |
          | PHY@2 +---+       +---+ PHY@2 |
          |       |               |       |
          ---------               ---------

This framework configures the bus topology from device tree data. The mechanics of switching the multiplexer is left to device-specific drivers. The follow-on patch contains a multiplexer driven by GPIO lines. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David Daney
Add of_mdio_find_bus(), which allows an mii_bus to be located given its associated device tree node. This is needed by the follow-on patch to add a driver for MDIO bus multiplexers. The of_mdiobus_register() function is modified so that the device tree node is recorded in the mii_bus; then we can find it again by iterating over all mdio_bus_class devices. Because the OF device tree has now become an integral part of the kernel, this can live in mdio_bus.c (which contains the needed mdio_bus_class structure) instead of of_mdio.c. Signed-off-by: David Daney <david.daney@cavium.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
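Usage from a multiplexer driver could look like this (a sketch; the "mdio-parent-bus" phandle property is the binding used by the follow-on mux driver, taken here as an assumption):

    /* Sketch: resolve the parent MDIO bus from a device tree phandle. */
    struct device_node *parent_np;
    struct mii_bus *parent_bus;

    parent_np = of_parse_phandle(dev->of_node, "mdio-parent-bus", 0);
    parent_bus = of_mdio_find_bus(parent_np);
    if (!parent_bus)
            return -EPROBE_DEFER;   /* parent bus not registered yet */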
-
Committed by Johannes Berg
Neither compare_ether_addr() nor compare_ether_addr_64bits() (which can fall back to the former) has comparison semantics like memcmp(), where the sign of the return value indicates sort order. We had a bug in the wireless code due to a blind memcmp replacement because of this. A cursory look suggests that the wireless bug was the only one due to this semantic difference. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
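In other words, only equality tests are safe (sketch):

    /* Correct: compare_ether_addr() only says equal (0) or not (non-zero). */
    if (!compare_ether_addr(a, b))
            ; /* addresses match */

    /* Wrong: unlike memcmp(), the sign of the return value carries no
     * sort-order meaning, so this test is meaningless. */
    if (compare_ether_addr(a, b) < 0)
            ; /* NOT "a sorts before b" */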
-
- 07 May 2012, 1 commit
-
-
Committed by Alexander Duyck
With the recent changes to how we compute the skb truesize, it occurs to me we are probably going to have a lot of calls to skb_end_pointer(skb) - skb->head. Instead of running all over the place doing that, it would make more sense to just make it a separate inline, skb_end_offset(skb). That way we can return the correct value without gcc having to do all the optimization to cancel out skb->head - skb->head. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
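A sketch of the helper; in the offset-based skb configuration the subtraction vanishes entirely:

    #ifdef NET_SKBUFF_DATA_USES_OFFSET
    static inline unsigned int skb_end_offset(const struct sk_buff *skb)
    {
            return skb->end;                /* already an offset */
    }
    #else
    static inline unsigned int skb_end_offset(const struct sk_buff *skb)
    {
            return skb->end - skb->head;    /* pointer configuration */
    }
    #endif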
-
- 05 May 2012, 1 commit
-
-
Committed by Eric Dumazet
It appears some networks play bad games with the two bits reserved for ECN. This can trigger false congestion notifications and very slow transfers. Since RFC 3168 (6.1.1) forbids SYN packets to carry CT bits, we can disable TCP ECN negotiation if we happen to receive mangled CT bits in the SYN packet. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Perry Lorier <perryl@google.com> Cc: Matt Mathis <mattmathis@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Wilmer van der Gaast <wilmer@google.com> Cc: Ankur Jain <jankur@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Dave Täht <dave.taht@bufferbloat.net> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
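Conceptually, the negotiation check amounts to the following (an illustrative sketch, not the exact patch):

    /* Sketch: only agree to ECN if the SYN carries ECE+CWR in the TCP
     * header and a clean (not-ECT) codepoint in the IP header; mangled
     * ECN bits in the SYN mean we fall back to non-ECN. */
    if (th->ece && th->cwr &&
        INET_ECN_is_not_ect(TCP_SKB_CB(skb)->ip_dsfield))
            inet_rsk(req)->ecn_ok = 1;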
-
- 04 May 2012, 4 commits
-
-
Committed by Karsten Keil
With multiple cards it is hard to figure out which port caused trouble in the layer2 routines (e.g. got a timeout). Now we have that information in the log output. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Keil
For certification tests it is very useful to be able to change the layer1 timer3 value at runtime. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Karsten Keil
To be fully preemption safe, we cannot handle an L2 timeout in the timer context itself; we should do all actions via the D-channel thread. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Alexander Duyck
This patch adds support for a skb_head_is_locked helper function. It is meant to be used any time we are considering transferring the head from skb->head to a paged frag. If the head is locked, it means we cannot remove the head from the skb, so it must be copied or we must take the skb as a whole. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
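The helper itself is tiny; a sketch consistent with the description:

    /* Sketch: the head cannot be turned into a page frag if it was
     * kmalloc()ed (no head_frag) or if the skb is cloned. */
    static inline bool skb_head_is_locked(const struct sk_buff *skb)
    {
            return !skb->head_frag || skb_cloned(skb);
    }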
-
- 03 May 2012, 3 commits
-
-
Committed by Eric Dumazet
Extend tcp coalescing by implementing it in tcp_queue_rcv(), the main receive function when the application is not blocked in recvmsg(). Function tcp_queue_rcv() is moved a bit to allow it to be called from tcp_data_queue(). This gives good results especially if GRO could not kick in, and if the skb head is a fragment. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This implements the advanced early retransmit (sysctl_tcp_early_retrans==2), which delays the fast retransmit by an interval of RTT/4. We borrow the RTO timer to implement the delay. If we receive another ACK or send a new packet, the timer is cancelled and restored to the original RTO value, offset by the time elapsed. When the delayed-ER timer fires, we enter fast recovery and perform fast retransmit. Signed-off-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yuchung Cheng
This patch implements RFC 5827 early retransmit (ER) for TCP. It reduces the DUPACK threshold (dupthresh) if outstanding packets are fewer than 4, to recover losses by fast recovery instead of timeout. While the algorithm is simple, small but frequent network reordering makes this feature dangerous: the connection repeatedly enters false recovery and degrades performance. Therefore we implement a mitigation suggested in the appendix of the RFC that delays entering fast recovery by a small interval, i.e., RTT/4. Currently ER is conservative and is disabled for the rest of the connection after the first reordering event. A large-scale web server experiment on the performance impact of ER is summarized in section 6 of the paper "Proportional Rate Reduction for TCP", IMC 2011. http://conferences.sigcomm.org/imc/2011/docs/p155.pdf Note that Linux has a similar feature called THIN_DUPACK. The differences are that THIN_DUPACK does not mitigate reordering and is only used after slow start. Currently ER is disabled if THIN_DUPACK is enabled. I would be happy to merge the THIN_DUPACK feature with ER if people think it's a good idea. ER is enabled by sysctl_tcp_early_retrans: 0: disables ER. 1: reduces dupthresh to packets_out - 1 when outstanding packets < 4. 2: (default) reduces dupthresh like mode 1 and, in addition, delays entering fast recovery by RTT/4. Note: mode 2 is implemented in the third part of this patch series. Signed-off-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
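Mode 1's threshold reduction is simple arithmetic (a sketch; the variable names are approximate, not the verbatim patch):

    unsigned int dupthresh = tp->reordering;        /* normally 3 */

    /* Sketch: with fewer than 4 packets in flight, 3 dupacks can never
     * arrive, so lower the threshold per RFC 5827. */
    if (sysctl_tcp_early_retrans && tp->packets_out < 4)
            dupthresh = tp->packets_out - 1;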
-
- 01 May 2012, 10 commits
-
-
Committed by Eric Dumazet
Fix kernel-doc typos in function names. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
Add ECN (Explicit Congestion Notification) marking capability to netem: tc qdisc add dev eth0 root netem drop 0.5 ecn Instead of dropping packets, try to ECN-mark them. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Hagen Paul Pfeifer <hagen@jauu.net> Cc: Stephen Hemminger <shemminger@vyatta.com> Acked-by: Hagen Paul Pfeifer <hagen@jauu.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
Remove useless casts and rename variables for less confusion. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Chris Elston
L2TPv3 defines an IP encapsulation packet format where data is carried directly over IP (no UDP). The kernel already has support for L2TP IP encapsulation over IPv4 (l2tp_ip). This patch introduces support for L2TP IP encapsulation over IPv6. The implementation is derived from ipv6/raw and ipv4/l2tp_ip. Signed-off-by: Chris Elston <celston@katalix.com> Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Chris Elston
This patch adds support for unmanaged L2TPv3 tunnels over IPv6 using the netlink API. We already support unmanaged L2TPv3 tunnels over IPv4. A patch to iproute2 to make use of this feature will be submitted separately. Signed-off-by: Chris Elston <celston@katalix.com> Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by James Chapman
Checkpatch warns about the use of __attribute__((packed)), so use the recommended __packed syntax instead. Signed-off-by: James Chapman <jchapman@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
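The change is purely syntactic, e.g. (the field layout here is made up):

    /* before */
    struct example_hdr {
            u8      flags;
            u16     len;
    } __attribute__((packed));

    /* after */
    struct example_hdr {
            u8      flags;
            u16     len;
    } __packed;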
-
Committed by Eric Dumazet
GRO can check whether the skb to be merged has its skb->head mapped to a page fragment instead of a kmalloc() area. We 'upgrade' skb->head to a fragment in itself. This avoids the frag_list fallback and permits building a true GRO skb (one sk_buff and up to 16 fragments) using less memory. This reduces the number of cache misses when the user makes its copy, since a single sk_buff is fetched. This is a follow-up to the patch "net: allow skb->head to be a page fragment". Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Maciej Żenczykowski <maze@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Ben Hutchings <bhutchings@solarflare.com> Cc: Matt Carlson <mcarlson@broadcom.com> Cc: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
skb->head is currently allocated from kmalloc(). This is convenient but has the drawback that the data cannot be converted to a page fragment if needed. We have three spots where it hurts:
1) GRO aggregation: when a linear skb must be appended to another skb, GRO uses the frag_list fallback, which is very inefficient since we keep all the struct sk_buff around. So drivers enabling GRO but delivering linear skbs to the network stack aren't enabling full GRO power.
2) splice(socket -> pipe): we must copy the linear part to a page fragment, which rather defeats splice()'s purpose (the zero-copy claim).
3) TCP coalescing: recently introduced, this permits grouping several contiguous segments into a single skb. This shortens queue lengths, saves kernel memory, and greatly reduces the probability of TCP collapses. This coalescing doesn't work on linear skbs (or we would need to copy data, which would be too slow).
Given all these issues, the following patch introduces the possibility of having skb->head be a fragment in itself. We use a new skb flag, skb->head_frag, to carry this information. build_skb() is changed to accept a frag_size argument. Drivers willing to provide a page fragment instead of kmalloc() data will set a non-zero value: the fragment size. Then, in situations where we need to convert the skb head to a frag in itself, we can check if skb->head_frag is set and avoid the copies or various fallbacks we have. This means drivers currently using frags could be updated to avoid the current skb->head allocation and reduce their memory footprint (aka skb truesize); that's 512 or 1024 bytes saved per skb. This also makes bpf/netfilter faster, since the 'first frag' will be part of the skb linear part, with no need to copy data. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Maciej Żenczykowski <maze@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Ben Hutchings <bhutchings@solarflare.com> Cc: Matt Carlson <mcarlson@broadcom.com> Cc: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
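Driver-side usage might look like this (a hedged sketch: alloc_frag()/free_frag() stand in for whatever page-fragment allocator a driver uses):

    /* Sketch: back skb->head with a page fragment instead of kmalloc().
     * A non-zero frag_size tells build_skb() to set skb->head_frag. */
    void *data = alloc_frag(frag_size);
    struct sk_buff *skb = build_skb(data, frag_size);

    if (!skb)
            free_frag(data);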
-
Committed by Matthew Garrett
More recent versions of the UEFI spec have added new attributes for variables. Add them. Signed-off-by: Matthew Garrett <mjg@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
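For reference, the newer attribute values per the UEFI 2.3.1 spec (shown as a sketch of what such definitions look like):

    #define EFI_VARIABLE_HARDWARE_ERROR_RECORD                 0x0000000000000008
    #define EFI_VARIABLE_AUTHENTICATED_WRITE_ACCESS            0x0000000000000010
    #define EFI_VARIABLE_TIME_BASED_AUTHENTICATED_WRITE_ACCESS 0x0000000000000020
    #define EFI_VARIABLE_APPEND_WRITE                          0x0000000000000040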
-
Committed by Eric Dumazet
Denys Fedoryshchenko reported frequent crashes on a proxy server and kindly provided a lockdep report that explains it all:

[ 762.903868]
[ 762.903880] =================================
[ 762.903890] [ INFO: inconsistent lock state ]
[ 762.903903] 3.3.4-build-0061 #8 Not tainted
[ 762.904133] ---------------------------------
[ 762.904344] inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-W} usage.
[ 762.904542] squid/1603 [HC0[0]:SC0[0]:HE1:SE1] takes:
[ 762.904542] (key#3){+.?...}, at: [<c0232cc4>] __percpu_counter_sum+0xd/0x58
[ 762.904542] {IN-SOFTIRQ-W} state was registered at:
[ 762.904542]   [<c0158b84>] __lock_acquire+0x284/0xc26
[ 762.904542]   [<c01598e8>] lock_acquire+0x71/0x85
[ 762.904542]   [<c0349765>] _raw_spin_lock+0x33/0x40
[ 762.904542]   [<c0232c93>] __percpu_counter_add+0x58/0x7c
[ 762.904542]   [<c02cfde1>] sk_clone_lock+0x1e5/0x200
[ 762.904542]   [<c0303ee4>] inet_csk_clone_lock+0xe/0x78
[ 762.904542]   [<c0315778>] tcp_create_openreq_child+0x1b/0x404
[ 762.904542]   [<c031339c>] tcp_v4_syn_recv_sock+0x32/0x1c1
[ 762.904542]   [<c031615a>] tcp_check_req+0x1fd/0x2d7
[ 762.904542]   [<c0313f77>] tcp_v4_do_rcv+0xab/0x194
[ 762.904542]   [<c03153bb>] tcp_v4_rcv+0x3b3/0x5cc
[ 762.904542]   [<c02fc0c4>] ip_local_deliver_finish+0x13a/0x1e9
[ 762.904542]   [<c02fc539>] NF_HOOK.clone.11+0x46/0x4d
[ 762.904542]   [<c02fc652>] ip_local_deliver+0x41/0x45
[ 762.904542]   [<c02fc4d1>] ip_rcv_finish+0x31a/0x33c
[ 762.904542]   [<c02fc539>] NF_HOOK.clone.11+0x46/0x4d
[ 762.904542]   [<c02fc857>] ip_rcv+0x201/0x23e
[ 762.904542]   [<c02daa3a>] __netif_receive_skb+0x319/0x368
[ 762.904542]   [<c02dac07>] netif_receive_skb+0x4e/0x7d
[ 762.904542]   [<c02dacf6>] napi_skb_finish+0x1e/0x34
[ 762.904542]   [<c02db122>] napi_gro_receive+0x20/0x24
[ 762.904542]   [<f85d1743>] e1000_receive_skb+0x3f/0x45 [e1000e]
[ 762.904542]   [<f85d3464>] e1000_clean_rx_irq+0x1f9/0x284 [e1000e]
[ 762.904542]   [<f85d3926>] e1000_clean+0x62/0x1f4 [e1000e]
[ 762.904542]   [<c02db228>] net_rx_action+0x90/0x160
[ 762.904542]   [<c012a445>] __do_softirq+0x7b/0x118
[ 762.904542] irq event stamp: 156915469
[ 762.904542] hardirqs last enabled at (156915469): [<c019b4f4>] __slab_alloc.clone.58.clone.63+0xc4/0x2de
[ 762.904542] hardirqs last disabled at (156915468): [<c019b452>] __slab_alloc.clone.58.clone.63+0x22/0x2de
[ 762.904542] softirqs last enabled at (156915466): [<c02ce677>] lock_sock_nested+0x64/0x6c
[ 762.904542] softirqs last disabled at (156915464): [<c0349914>] _raw_spin_lock_bh+0xe/0x45
[ 762.904542]
[ 762.904542] other info that might help us debug this:
[ 762.904542]  Possible unsafe locking scenario:
[ 762.904542]
[ 762.904542]        CPU0
[ 762.904542]        ----
[ 762.904542]   lock(key#3);
[ 762.904542]   <Interrupt>
[ 762.904542]     lock(key#3);
[ 762.904542]
[ 762.904542]  *** DEADLOCK ***
[ 762.904542]
[ 762.904542] 1 lock held by squid/1603:
[ 762.904542]  #0: (sk_lock-AF_INET){+.+.+.}, at: [<c03055c0>] lock_sock+0xa/0xc
[ 762.904542]
[ 762.904542] stack backtrace:
[ 762.904542] Pid: 1603, comm: squid Not tainted 3.3.4-build-0061 #8
[ 762.904542] Call Trace:
[ 762.904542]  [<c0347b73>] ? printk+0x18/0x1d
[ 762.904542]  [<c015873a>] valid_state+0x1f6/0x201
[ 762.904542]  [<c0158816>] mark_lock+0xd1/0x1bb
[ 762.904542]  [<c015876b>] ? mark_lock+0x26/0x1bb
[ 762.904542]  [<c015805d>] ? check_usage_forwards+0x77/0x77
[ 762.904542]  [<c0158bf8>] __lock_acquire+0x2f8/0xc26
[ 762.904542]  [<c0159b8e>] ? mark_held_locks+0x5d/0x7b
[ 762.904542]  [<c0159cf6>] ? trace_hardirqs_on+0xb/0xd
[ 762.904542]  [<c0158dd4>] ? __lock_acquire+0x4d4/0xc26
[ 762.904542]  [<c01598e8>] lock_acquire+0x71/0x85
[ 762.904542]  [<c0232cc4>] ? __percpu_counter_sum+0xd/0x58
[ 762.904542]  [<c0349765>] _raw_spin_lock+0x33/0x40
[ 762.904542]  [<c0232cc4>] ? __percpu_counter_sum+0xd/0x58
[ 762.904542]  [<c0232cc4>] __percpu_counter_sum+0xd/0x58
[ 762.904542]  [<c02cebc4>] __sk_mem_schedule+0xdd/0x1c7
[ 762.904542]  [<c02d178d>] ? __alloc_skb+0x76/0x100
[ 762.904542]  [<c0305e8e>] sk_wmem_schedule+0x21/0x2d
[ 762.904542]  [<c0306370>] sk_stream_alloc_skb+0x42/0xaa
[ 762.904542]  [<c0306567>] tcp_sendmsg+0x18f/0x68b
[ 762.904542]  [<c031f3dc>] ? ip_fast_csum+0x30/0x30
[ 762.904542]  [<c0320193>] inet_sendmsg+0x53/0x5a
[ 762.904542]  [<c02cb633>] sock_aio_write+0xd2/0xda
[ 762.904542]  [<c015876b>] ? mark_lock+0x26/0x1bb
[ 762.904542]  [<c01a1017>] do_sync_write+0x9f/0xd9
[ 762.904542]  [<c01a2111>] ? file_free_rcu+0x2f/0x2f
[ 762.904542]  [<c01a17a1>] vfs_write+0x8f/0xab
[ 762.904542]  [<c01a284d>] ? fget_light+0x75/0x7c
[ 762.904542]  [<c01a1900>] sys_write+0x3d/0x5e
[ 762.904542]  [<c0349ec9>] syscall_call+0x7/0xb
[ 762.904542]  [<c0340000>] ? rp_sidt+0x41/0x83

The bug is that sk_sockets_allocated_read_positive() calls percpu_counter_sum_positive() without BH being disabled. This bug was added in commit 180d8cd9 (foundations of per-cgroup memory pressure controlling.), since the previous code was using percpu_counter_read_positive(), which is IRQ safe. In __sk_mem_schedule() we don't need the precise count of allocated sockets and can revert to the previous behavior. Reported-by: Denys Fedoryshchenko <denys@visp.net.lb> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Glauber Costa <glommer@parallels.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
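The essence of the fix, as a sketch (the surrounding accessor is approximate):

    /* Sketch: percpu_counter_sum_positive() takes the counter's spinlock
     * and must not race the softirq path that updates the counter.
     * percpu_counter_read_positive() returns the (approximate) running
     * total without locking and is IRQ safe; __sk_mem_schedule() does
     * not need the exact sum. */
    - return percpu_counter_sum_positive(prot->sockets_allocated);
    + return percpu_counter_read_positive(prot->sockets_allocated);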
-
- 30 April 2012, 3 commits
-
-
Committed by Hans Schillstrom
Change the order of init so that netns init is ready when the ioctl and netlink interfaces are registered. Ver2: whitespace fixes and __init added. Reported-by: "Ryan O'Hara" <rohara@redhat.com> Signed-off-by: Hans Schillstrom <hans.schillstrom@ericsson.com> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Simon Horman <horms@verge.net.au>
-
Committed by Hans Schillstrom
ip_vs_create_timeout_table() can return NULL. All protocols' init_netns functions are affected by this patch. Signed-off-by: Hans Schillstrom <hans.schillstrom@ericsson.com> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Simon Horman <horms@verge.net.au>
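The per-protocol fix is a straightforward NULL check; a sketch following the IPVS naming conventions:

    /* Sketch: propagate allocation failure from a protocol's netns init. */
    static int __ip_vs_tcp_init(struct net *net, struct ip_vs_proto_data *pd)
    {
            pd->timeout_table = ip_vs_create_timeout_table((int *)tcp_timeouts,
                                                           sizeof(tcp_timeouts));
            if (!pd->timeout_table)
                    return -ENOMEM;
            return 0;
    }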
-
Committed by Linus Torvalds
The actual internal pipe implementation is already really about individual packets (called "pipe buffers"), and this simply exposes that as a special packetized mode. When we are in packetized mode (marked by O_DIRECT, as suggested by Alan Cox), a write() on a pipe will not merge the new data with previous writes, so each write will get a pipe buffer of its own. The pipe buffer is then marked with the PIPE_BUF_FLAG_PACKET flag, which in turn tells the reader side to break the read at that boundary (and throw away any partial packet contents that do not fit in the read buffer). End result: as long as you do writes less than PIPE_BUF in size (so that the pipe doesn't have to split them up), you can now treat the pipe as a packet interface, where each read() system call will read one packet at a time. You can just use a sufficiently big read buffer (PIPE_BUF is sufficient, since bigger than that doesn't guarantee atomicity anyway), and the return value of the read() will naturally give you the size of the packet. NOTE! We do not support zero-sized packets, and zero-sized reads and writes to a pipe continue to be no-ops. Also note that big packets will currently be split at write time, but the size at which that happens is not really specified (except that it's bigger than PIPE_BUF). Currently that limit is the system page size, but we might want to explicitly support bigger packets some day. The main user for this is going to be the autofs packet interface, allowing us to stop having to care so deeply about exact packet sizes (which have had bugs with 32/64-bit compatibility modes). But user space can create packetized pipes with "pipe2(fd, O_DIRECT)", which will fail with EINVAL on kernels that do not support this interface. Tested-by: Michael Tokarev <mjt@tls.msk.ru> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: David Miller <davem@davemloft.net> Cc: Ian Kent <raven@themaw.net> Cc: Thomas Meyer <thomas@m3y3r.de> Cc: stable@kernel.org # needed for systemd/autofs interaction fix Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
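A minimal user-space demonstration (a sketch; error handling trimmed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fd[2];
            char buf[PIPE_BUF];
            ssize_t n;

            if (pipe2(fd, O_DIRECT) < 0)    /* EINVAL: no kernel support */
                    return 1;

            write(fd[1], "abc", 3);         /* first packet */
            write(fd[1], "de", 2);          /* second packet, not merged */

            n = read(fd[0], buf, sizeof(buf));
            printf("first read: %zd bytes\n", n);   /* prints 3 */
            return 0;
    }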
-
- 29 April 2012, 2 commits
-
-
Committed by Benjamin LaHaise
Now that encap_rcv() works on IPv6 UDP sockets, wire L2TP up to IPv6. Support has been tested with and without hardware offloading. This version fixes the L2TP-over-localhost issue with incorrect checksums being reported. Signed-off-by: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Benjamin LaHaise
Now that the semantics of udpv6_queue_rcv_skb() match IPv4's udp_queue_rcv_skb(), introduce the UDP encap_rcv() hook for IPv6. Signed-off-by: Benjamin LaHaise <bcrl@kvack.org> Signed-off-by: David S. Miller <davem@davemloft.net>
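A tunnel driver hooks in the same way as for IPv4; a sketch using the L2TP names (taken here as an assumption):

    /* Sketch: register an encapsulation receive hook on a UDPv6 socket.
     * The hook returns 0 when it consumed the packet and 1 to hand it
     * back to normal UDP processing. */
    udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
    udp_sk(sk)->encap_rcv  = l2tp_udp_encap_recv;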
-
- 28 April 2012, 1 commit
-
-
Committed by Randy Dunlap
Fix a kernel-doc warning in spi.h (copy/paste): Warning(include/linux/spi/spi.h:365): No description found for parameter 'unprepare_transfer_hardware' Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-