- 08 Feb 2011, 1 commit
-
-
By Felix Fietkau
Using skb_header_cloned() to check whether it is safe to write to the skb is not enough - mac80211 also touches the tailroom of the skb. Initially this check was only used to increment a counter; however, commit 4cd06a34 ("mac80211: skip unnecessary pskb_expand_head calls") changed the code to also skip skb data reallocation if no extra head/tailroom was needed. That commit introduced a regression at least with iwl3945, which this patch fixes.

Reported-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Tested-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
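As an editor's illustration, a minimal sketch of the corrected test; names are illustrative, not the exact mac80211 code:

#include <linux/skbuff.h>

/* A cloned header is not the only hazard - shared or insufficient
 * tailroom must force a reallocation too, since mac80211 appends to
 * the tail of the frame. */
static bool tx_skb_needs_realloc(struct sk_buff *skb,
                                 int head_need, int tail_need)
{
        if (skb_headroom(skb) < head_need)
                return true;            /* not enough headroom */
        if (skb_tailroom(skb) < tail_need)
                return true;            /* not enough tailroom */
        if (tail_need > 0 && skb_cloned(skb))
                return true;            /* tail bytes shared with a clone */
        return skb_header_cloned(skb);  /* header bytes shared */
}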
-
- 03 Feb 2011, 1 commit
-
-
By Johannes Berg
When the off-channel TX is done with remain-on-channel offloaded to hardware, the reported cookie is wrong, as in that case we shouldn't use the SKB as the cookie but need to instead use the corresponding r-o-c cookie (XOR'ed with 2 to prevent API mismatches). Fix this by keeping track of the hw_roc_skb pointer just for the status processing and use the correct cookie to report in this case. We can't use the hw_roc_skb pointer itself because it is NULL'ed when the frame is transmitted to prevent it being used twice. This fixes a bug where the P2P state machine in the supplicant gets stuck because it never gets a correct result for its transmitted frame.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 26 Jan 2011, 1 commit
-
-
By Felix Fietkau
Some drivers (e.g. ath9k) do not always disable beacons when they're supposed to. When an interface is changed using the change_interface op, the mode-specific sdata part is in an undefined state and trying to get a beacon at this point can produce weird crashes. To fix this, add a check for ieee80211_sdata_running before using anything from the sdata.

Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Cc: stable@kernel.org
Signed-off-by: John W. Linville <linville@tuxdriver.com>
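The guard itself is a one-liner; an editor's sketch of its placement (the surrounding function is paraphrased, not quoted):

/* Bail out before touching mode-specific sdata state while the
 * interface type is being changed. */
if (!ieee80211_sdata_running(sdata))
        return NULL;    /* no beacon for a non-running interface */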
-
- 20 Jan 2011, 8 commits
-
-
By Johan Hedberg
The conn->sec_level value is supposed to represent the current level of security that the connection has. However, by assigning to it before requesting authentication it will have the wrong value during the authentication procedure. To fix this, a pending_sec_level variable is added which is used to track the desired security level while making sure that sec_level always represents the current level of security.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
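An editor's sketch of the resulting pattern, using the field names from the description above (placement illustrative):

/* Remember only the *requested* level before authenticating: */
if (sec_level > conn->pending_sec_level)
        conn->pending_sec_level = sec_level;

/* ... authentication procedure runs ...; only on success is the
 * connection's current level promoted: */
conn->sec_level = conn->pending_sec_level;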
-
By Johan Hedberg
When there is an existing connection, l2cap_check_security needs to be called to ensure that the security level of the new socket is fulfilled. Normally l2cap_do_start takes care of this, but that function doesn't get called for SOCK_RAW type sockets. This patch adds the necessary l2cap_check_security call to the appropriate branch in l2cap_do_connect.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By Johan Hedberg
The logic for determining the needed auth_type for an L2CAP socket is rather complicated and has so far been duplicated in l2cap_check_security as well as l2cap_do_connect. Additionally, the l2cap_check_security code was completely missing the handling of SOCK_RAW type sockets. This patch creates a unified function for the evaluation and makes l2cap_do_connect and l2cap_check_security use that function.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By Johan Hedberg
If an existing connection has a MITM protection requirement (the first bit of the auth_type), then that requirement should not be cleared by new sockets that reuse the ACL but don't have that requirement.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By Johan Hedberg
This reverts commit 04530982. That commit is wrong for two reasons:

- The conn->sec_level shouldn't be updated without performing authentication first (as it's supposed to represent the level of security that the existing connection has).

- A higher auth_type value doesn't mean "more secure" like the commit seems to assume. E.g. dedicated bonding with MITM protection is 0x03 whereas general bonding without MITM protection is 0x04.

hci_conn_auth already takes care of updating conn->auth_type, so hci_connect doesn't need to do it.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Acked-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By Lukáš Turek
Fix a bug introduced in commit 9cf5b0ea: function rfcomm_recv_ua calls rfcomm_session_put without checking that the session is not referenced by some DLC. If the session is freed, that DLC would refer to deallocated memory, causing an oops later, as shown in this bug report:
https://bugzilla.kernel.org/show_bug.cgi?id=15994

Signed-off-by: Lukas Turek <8an@praha12.net>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By Johan Hedberg
The blacklist should be freed before the hci device gets unregistered.

Signed-off-by: Johan Hedberg <johan.hedberg@nokia.com>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
By David Sterba
CC: Marcel Holtmann <marcel@holtmann.org>
CC: "Gustavo F. Padovan" <padovan@profusion.mobi>
CC: João Paulo Rechi Vita <jprvita@profusion.mobi>
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi>
-
- 14 Jan 2011, 2 commits
-
-
By Luciano Coelho
When the buffer size is set to zero in the block ack parameter set field, we should use the maximum supported number of subframes. The existing code was bogus and was doing some unnecessary calculations that led to wrong values. Thanks Johannes for helping me figure this one out.

Cc: stable@kernel.org
Cc: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Luciano Coelho <coelho@ti.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
By Johannes Berg
Since the introduction of the fixes for the reorder timer, mac80211 will cause lockdep warnings because lockdep confuses local->skb_queue and local->rx_skb_queue and treats their locks as the same. However, their locks are different, and are valid in different contexts (the former is used in IRQ context, the latter in BH only); the only thing to be done is mark the former as a different lock class so that lockdep can tell the difference.

Reported-by: Larry Finger <Larry.Finger@lwfinger.net>
Reported-by: Sujith <m.sujith@gmail.com>
Reported-by: Miles Lane <miles.lane@gmail.com>
Tested-by: Sujith <m.sujith@gmail.com>
Tested-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
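The standard way to separate two same-typed locks for lockdep is a dedicated lock_class_key; an editor's sketch, assuming the queue is initialized in one place (the key name is made up):

/* Put local->skb_queue in its own lockdep class so it is no longer
 * conflated with local->rx_skb_queue. */
static struct lock_class_key ieee80211_skb_queue_class;

skb_queue_head_init(&local->skb_queue);
lockdep_set_class(&local->skb_queue.lock, &ieee80211_skb_queue_class);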
-
- 10 Jan 2011, 8 commits
-
-
By Jesse Gross
In order to compute the features for other offloads (primarily scatter/gather), we need to first check the ability of the NIC to offload the checksum for the packet. Since we have already computed this, we can directly use the result instead of figuring it out again.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jesse Gross
This switches skb_need_linearize() to use the features that have been centrally computed. In doing so, this fixes a problem where scatter/gather should not be used because the card does not support checksum offloading on that type of packet. On device registration we only check that some form of checksum offloading is available if scatter/gather is enabled, but we must also check at transmission time. Examples of this include IPv6 or vlan packets on a NIC that only supports IPv4 offloading.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jesse Gross
This switches dev_gso_segment() to use the device features computed by the centralized routine. In doing so, it fixes a problem where it would always use dev->features, instead of those appropriate to the number of vlan tags if any are present.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jesse Gross
Now that there is a single function that can compute the device features relevant to a packet, we don't want to run it for each offload. This converts netif_needs_gso() to take the features of the device, rather than computing them itself.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jesse Gross
netif_get_vlan_features() is currently only used by netif_needs_gso(), so it only concerns itself with GSO features. However, several other places also should take into account the contents of the packet when deciding whether to offload to hardware. This generalizes the function to return features about all of the various forms of offloading. Since offloads tend to be linked together, this avoids duplicating the logic in each location (i.e. the scatter/gather code also needs the checksum logic).

Suggested-by: Michał Mirosław <mirqus@gmail.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
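Taken together, this series converges on a "compute once, pass everywhere" shape on the transmit path. An editor's sketch of that shape - helper names approximate the API described above (and the netdev_features_t type postdates this series), so treat it as illustrative:

netdev_features_t features = netif_skb_features(skb); /* packet-aware */

if (netif_needs_gso(skb, features)) {
        /* segment in software before handing to the device */
} else if (skb_needs_linearize(skb, features)) {
        /* scatter/gather unsafe for this packet: linearize first */
}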
-
By Jesse Gross
We currently only have software fallback for one type of checksum: the TCP/UDP one's complement. This means that a protocol that uses hardware offloading for a different type of checksum (FCoE, SCTP) must directly check the device's features and do the right thing ahead of time. By the time we get to dev_can_checksum(), we're only deciding whether to apply the one algorithm in software or hardware. NETIF_F_HW_CSUM has the same capabilities as the software version, so we should always use it if present. The primary advantage of this is that multiply-tagged vlans can use hardware checksumming.

Signed-off-by: Jesse Gross <jesse@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Randy Dunlap
Fix new kernel-doc notation warnings in net/core/filter.c:

Warning(net/core/filter.c:172): No description found for parameter 'fentry'
Warning(net/core/filter.c:172): Excess function parameter 'filter' description in 'sk_run_filter'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jan Engelhardt
Because NLM_F_DUMP is composed of two bits, NLM_F_ROOT | NLM_F_MATCH, doing "if (x & NLM_F_DUMP)" tests for _either_ of the bits being set. Because NLM_F_MATCH's value overlaps with NLM_F_EXCL, non-dump requests with NLM_F_EXCL set are mistaken for dump requests. Substitute the condition to test for _all_ bits being set.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
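Reduced to code, the before/after looks like this (start_dump() is a stand-in for the dump path):

/* buggy: NLM_F_DUMP is NLM_F_ROOT | NLM_F_MATCH, and NLM_F_MATCH has
 * the same value as NLM_F_EXCL, so this also fires for NLM_F_EXCL: */
if (nlh->nlmsg_flags & NLM_F_DUMP)
        start_dump();

/* fixed: require _all_ NLM_F_DUMP bits to be set */
if ((nlh->nlmsg_flags & NLM_F_DUMP) == NLM_F_DUMP)
        start_dump();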
-
- 07 Jan 2011, 16 commits
-
-
By Gerrit Renker
The 'seq_window' sysctl sets the initial value for the DCCP Sequence Window, which may range from 32..2^46-1 (RFC 4340, 7.5.2). The patch sets the upper bound consistently to 2^32-1 on both 32- and 64-bit systems, which should be sufficient - with an RTT of 1 sec and 1-byte packets, a seq_window of 2^32-1 corresponds to a link speed of 34 Gbps.

Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
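As a back-of-envelope check of that figure (editor's note):

  (2^32 - 1) packets per 1 s RTT x 1 byte/packet ~= 4.29e9 B/s
  4.29e9 B/s x 8 bit/B ~= 34.4 Gbit/s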
-
By Samuel Jero
Currently dccp_check_seqno allows any valid packet to update the Greatest Sequence Number Received, even if that packet's sequence number is less than the current GSR. This patch adds a check to make sure that the new packet's sequence number is greater than GSR.

Signed-off-by: Samuel Jero <sj323707@ohio.edu>
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
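An editor's sketch of the guard, using the 48-bit circular comparison helper DCCP defines for its sequence space (placement illustrative):

const u64 seq = DCCP_SKB_CB(skb)->dccpd_seq;

/* GSR only ever moves forward in 48-bit sequence space */
if (after48(seq, dp->dccps_gsr))
        dp->dccps_gsr = seq;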
-
By Samuel Jero
Currently dccp_check_seqno returns 0 (indicating a valid packet) if the acknowledgment number is out of bounds and the sync that RFC 4340 mandates at this point is currently being rate-limited. This function should return -1, indicating an invalid packet.

Signed-off-by: Samuel Jero <sj323707@ohio.edu>
Acked-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
By Nick Piggin
The problem that this patch aims to fix is vfsmount refcounting scalability. We need to take a reference on the vfsmount for every successful path lookup, which often go to the same mount point.

The fundamental difficulty is that a "simple" reference count can never be made scalable, because any time a reference is dropped, we must check whether that was the last reference. To do that requires communication with all other CPUs that may have taken a reference count.

We can make refcounts more scalable in a couple of ways, involving keeping distributed counters, and checking for the global-zero condition less frequently:

- Check the global sum once every interval (this will delay zero detection for some interval, so it's probably a showstopper for vfsmounts).

- Keep a local count and only take the global sum when local reaches 0 (this is difficult for vfsmounts, because we can't hold preempt off for the life of a reference, so a counter would need to be per-thread or tied strongly to a particular CPU, which requires more locking).

- Keep a local difference of increments and decrements, which allows us to sum the total difference and hence find the refcount when summing all CPUs. Then, keep a single integer "long" refcount for slow and long-lasting references, and only take the global sum of local counters when the long refcount is 0.

This last scheme is what I implemented here. Attached mounts and process root and working directory references are "long" references, and everything else is a short reference. This allows scalable vfsmount references during path walking over mounted subtrees and unattached (lazy umounted) mounts with processes still running in them.

This results in one fewer atomic op in the fastpath: mntget is now just a per-CPU inc, rather than an atomic inc; and mntput just requires a spinlock and non-atomic decrement in the common case. However code is otherwise bigger and heavier, so single threaded performance is basically a wash.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
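A compact editor's sketch of the third scheme, with made-up names; the real vfsmount code spreads this logic across mntget()/mntput() and the mount table:

#include <linux/percpu.h>
#include <linux/cpumask.h>

struct scalable_ref {
        int __percpu *pcpu_delta;  /* per-CPU increments minus decrements */
        int long_count;            /* attached mounts, cwd/root references */
};

/* Fast path: no atomic read-modify-write, no shared cache line. */
static inline void sref_get(struct scalable_ref *r)
{
        this_cpu_inc(*r->pcpu_delta);
}

static inline void sref_put(struct scalable_ref *r)
{
        this_cpu_dec(*r->pcpu_delta);
}

/* Slow path: only meaningful while writers are excluded and
 * long_count is 0; sums the distributed deltas to recover the count. */
static int sref_count(struct scalable_ref *r)
{
        int cpu, sum = r->long_count;

        for_each_possible_cpu(cpu)
                sum += *per_cpu_ptr(r->pcpu_delta, cpu);
        return sum;
}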
-
By Nick Piggin
Regardless of how much we possibly try to scale dcache, there is likely always going to be some fundamental contention when adding or removing children under the same parent. Pseudo filesystems do not seem to need connected dentries, because by definition they are disconnected.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
By Nick Piggin
Reduce some branches and memory accesses in dcache lookup by adding dentry flags to indicate common d_ops are set, rather than having to check them. This saves a pointer memory access (dentry->d_op) in common path lookup situations, and saves another pointer load and branch in cases where we have d_op but not the particular operation.

Patched with:

git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
By Nick Piggin
Pseudo filesystems that don't put their inodes on the RCU list, or make them reachable by rcu-walk dentries, do not need to RCU-free their inodes.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
By Nick Piggin
RCU free the struct inode. This will allow:

- The subsequent store-free path walking patch. The inode must be consulted for permissions when walking, so an RCU inode reference is a must.

- sb_inode_list_lock to be moved inside i_lock, because sb list walkers who want to take i_lock no longer need to take sb_inode_list_lock to walk the list in the first place. This will simplify and optimize locking.

- Removing some nested trylock loops in dcache code.

- Potentially simplifying things a bit in VM land. We would not need to take the page lock to follow page->mapping.

The downside of this is the performance cost of using RCU. In a simple creat/unlink microbenchmark, performance drops by about 10% due to inability to reuse cache-hot slab objects. As iterations increase and RCU freeing starts kicking over, this increases to about 20%. In cases where inode lifetimes are longer (i.e. many inodes may be allocated during the average life span of a single inode), a lot of this cache reuse is not applicable, so the regression caused by this patch is smaller.

The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU; however, this adds some complexity to list walking and store-free path walking, so I prefer to implement this at a later date, if it is shown to be a win in real situations. I haven't found a regression in any non-micro benchmark, so I doubt it will be a problem.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
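For a filesystem, the conversion boils down to deferring the slab free by one grace period. An editor's sketch for a hypothetical filesystem "foo" (foo_inode_cachep and FOO_I() are made-up names), assuming the i_rcu head this series introduces:

#include <linux/fs.h>
#include <linux/slab.h>

/* Free the inode only after an RCU grace period, so rcu-walk path
 * walkers can still safely read it in the meantime. */
static void foo_i_callback(struct rcu_head *head)
{
        struct inode *inode = container_of(head, struct inode, i_rcu);

        kmem_cache_free(foo_inode_cachep, FOO_I(inode));
}

static void foo_destroy_inode(struct inode *inode)
{
        call_rcu(&inode->i_rcu, foo_i_callback);
}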
-
By Nick Piggin
Change d_delete from a dentry deletion notification to a dentry caching advice, more like ->drop_inode. Require it to be constant and idempotent, and not take d_lock. This is how all existing filesystems use the callback anyway. This makes fine-grained dentry locking of dput and dentry lru scanning much simpler.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
-
By Changli Gao
Since nf_bridge_maybe_copy_header() may change the length of the skb, we should check the length of the skb after it to handle PPPoE skbs.

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Pablo Neira Ayuso
In commit 1ae4de0c, the secctx was exported via the /proc/net/netfilter/nf_conntrack and ctnetlink interfaces instead of the secmark. That patch introduced the use of security_secid_to_secctx(), which may return a non-zero value on error. In one of my setups, I have NF_CONNTRACK_SECMARK enabled but no security modules. Thus, security_secid_to_secctx() returns a negative value that results in the breakage of the /proc and `conntrack -L' outputs. To fix this, we skip the inclusion of secctx if the aforementioned function fails. This patch also fixes the dynamic netlink message size calculation if security_secid_to_secctx() returns an error, since its logic is also wrong. This problem exists in Linux kernel >= 2.6.37.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Changli Gao
Since nf_ct_expect_dst_hash() may be called without nf_conntrack_lock held, nf_ct_expect_hash_rnd should be initialized atomically. In this patch, we use nf_conntrack_hash_rnd instead of nf_ct_expect_hash_rnd.

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
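A race-free lazy initialization typically uses cmpxchg() so that concurrent callers converge on one value. An editor's sketch of that shape (close to, but not copied from, the netfilter fix):

static u32 nf_conntrack_hash_rnd;

static void maybe_init_hash_rnd(void)
{
        u32 rnd;

        if (likely(nf_conntrack_hash_rnd))
                return;
        do {
                get_random_bytes(&rnd, sizeof(rnd));
        } while (!rnd);         /* reserve 0 for "not seeded yet" */
        /* Racing callers all agree on whichever value lands first: */
        cmpxchg(&nf_conntrack_hash_rnd, 0, rnd);
}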
-
By Eric Dumazet
RFC 3168 (The Addition of Explicit Congestion Notification to IP) states in section 5.3 (Fragmentation):

  ECN-capable packets MAY have the DF (Don't Fragment) bit set. Reassembly of a fragmented packet MUST NOT lose indications of congestion. In other words, if any fragment of an IP packet to be reassembled has the CE codepoint set, then one of two actions MUST be taken:

  * Set the CE codepoint on the reassembled packet. However, this MUST NOT occur if any of the other fragments contributing to this reassembly carries the Not-ECT codepoint.

  * The packet is dropped, instead of being reassembled, for any other reason.

This patch implements this requirement for IPv4, choosing the first action:

  If one fragment had the NO-ECT codepoint, the reassembled frame has NO-ECT.
  Elif one fragment had the CE codepoint, the reassembled frame has CE.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
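An editor's sketch of the merge rule; the kernel tracks this state in fragment-queue flags rather than in a helper like this:

#include <net/inet_ecn.h>       /* INET_ECN_NOT_ECT, INET_ECN_CE, ... */

static u8 reassembled_ecn(bool any_not_ect, bool any_ce, u8 otherwise)
{
        if (any_not_ect)
                return INET_ECN_NOT_ECT;        /* any Not-ECT wins */
        if (any_ce)
                return INET_ECN_CE;             /* else congestion is kept */
        return otherwise;                       /* e.g. ECT(0) or ECT(1) */
}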
-
By Dan Carpenter
The original code has a use-after-free bug because it's not using the _safe() version of the list_for_each_entry() macro.

Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
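The generic shape of this bug class, illustrated with a hypothetical struct item that embeds a struct list_head named "list":

struct item *pos, *tmp;

/* Buggy: after kfree(pos), the loop still reads pos->list.next. */
list_for_each_entry(pos, &head, list) {
        list_del(&pos->list);
        kfree(pos);
}

/* Fixed: _safe caches the next entry before the body runs. */
list_for_each_entry_safe(pos, tmp, &head, list) {
        list_del(&pos->list);
        kfree(pos);
}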
-
By Dan Carpenter
There is a "goto nla_put_failure" hidden inside the NLA_PUT() macro, but we're holding the dcb_lock, so we need to unlock first.

Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
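The shape of the fix, with made-up names (snapshot_state, DCB_ATTR_EXAMPLE): copy what you need under the lock, unlock, and only then emit attributes, because NLA_PUT() can jump away:

spin_lock(&dcb_lock);
value = snapshot_state();       /* hypothetical helper */
spin_unlock(&dcb_lock);

/* expands to a hidden "goto nla_put_failure" on error: */
NLA_PUT_U8(skb, DCB_ATTR_EXAMPLE, value);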
-
By Eric Dumazet
Leonardo Chiquitto found that poll() could block forever on TCP sockets when urgent data was received, if the event flags only contained POLLPRI. He did a bisection and found commit 4938d7e0 (poll: avoid extra wakeups in select/poll) was the source of the problem.

The problem is that TCP sockets use the standard sock_def_readable() function for their sk_data_ready() handler, and sock_def_readable() doesn't signal POLLPRI. Only TCP is affected by the problem. Adding POLLPRI to the list of flags might trigger unnecessary schedules, but URGENT handling is such a seldom-used feature that this seems a good compromise.

Thanks a lot to Leonardo for providing the bisection result and a test program as well.

Reference: http://www.spinics.net/lists/netdev/msg151793.html

Reported-and-bisected-by: Leonardo Chiquitto <leonardo.lists@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
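An editor's sketch of the one-flag change in the default readable callback; the exact flag set at that kernel version may differ slightly:

/* sock_def_readable(): also signal POLLPRI so POLLPRI-only poll()
 * waiters are woken when urgent data arrives. */
wake_up_interruptible_sync_poll(&wq->wait,
                                POLLIN | POLLPRI | POLLRDNORM | POLLRDBAND);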
-
- 06 Jan 2011, 3 commits
-
-
By David S. Miller
unix_release() can asynchronously set socket->sk to NULL, and it does so without holding the unix_state_lock() on "other" during stream connects. However, the reverse mapping, sk->sk_socket, is only transitioned to NULL under the unix_state_lock(). Therefore make the security hooks follow the reverse mapping instead of the forward mapping.

Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
Commit 57dbb2d8 (sched: add head drop fifo queue) introduced pfifo_head_drop and broke the invariant that sch->bstats.bytes and sch->bstats.packets are COUNTERs (increasing counters only). This can break estimators, because est_timer() handles unsigned deltas only; a decreasing counter can then give a huge unsigned delta.

My mid-term suggestion would be to change things so that sch->bstats.bytes and sch->bstats.packets are incremented in dequeue() only, not at enqueue() time. We also could add drop_bytes/drop_packets and provide estimations of drop rates. It would be more sensible anyway for very low speeds and big bursts. Right now, if we drop packets, they still are accounted for in byte/packet absolute counters and rate estimators.

Before this mid-term change, this patch makes pfifo_head_drop behavior similar to other qdiscs in case of drops: don't decrement sch->bstats.bytes and sch->bstats.packets.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Johannes Berg
Somehow this snuck into my earlier patch, and only now did I see a compiler warning:

net/mac80211/led.c:218:13: warning: function '__ieee80211_create_tpt_led_trigger' with external linkage has definition

Remove the stray extern.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
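For reference, the construct that class of warning points at, with a made-up function name:

extern int foo(void) { return 0; }   /* 'extern' on a definition */
int foo(void) { return 0; }          /* fixed: stray 'extern' dropped */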
-