- 04 Aug 2010, 7 commits
-
-
Submitted by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Now that rpc_run_task() is the sole entry point for RPC calls, we can move the remaining rpc_client-related initialisation of struct rpc_task from sched.c into clnt.c. Also move rpc_killall_tasks() into the same file, since that too is tied to the rpc_clnt.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Make rpc_exit() non-inline, and ensure that it always wakes up a task that has been queued. Kill off the now unused rpc_wake_up_task().
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
This patch allows the user to configure the credential cache hashtable size using a new module parameter, auth_hashtable_size. When set, this parameter is rounded up to the nearest power of two, with a maximum allowed value of 1024 elements.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
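A minimal sketch of how such a parameter could be declared and clamped; the names below are assumptions mirroring the description, and the actual SUNRPC code may differ:

```c
#include <linux/module.h>
#include <linux/log2.h>
#include <linux/kernel.h>

/* Hypothetical parameter and helper, illustrative only. */
static unsigned int auth_hashtable_size = 16;
module_param(auth_hashtable_size, uint, 0644);
MODULE_PARM_DESC(auth_hashtable_size, "credential cache hash table size");

static unsigned long auth_hashtable_effective_size(void)
{
	/* Round up to the nearest power of two, capped at 1024 entries. */
	return min(roundup_pow_of_two(auth_hashtable_size), 1024UL);
}
```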
-
Submitted by Trond Myklebust
Cleanup in preparation for allowing the user to determine the maximum hash table size.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Submitted by Trond Myklebust
Both rpc_restart_call_prepare() and rpc_restart_call() test for the RPC_TASK_KILLED flag, and refuse to restart the RPC call if that flag is set. This patch allows callers to know whether or not the restart was successful, so that they can perform cleanups etc. in case of failure.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
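A sketch of the resulting calling convention; the flag test and return values are assumptions based on the description above, not the verbatim SUNRPC code:

```c
#include <linux/sunrpc/sched.h>

/* Illustrative only: restart is refused for a killed task, and the
 * caller learns about it through the return value. */
static int rpc_restart_call_sketch(struct rpc_task *task)
{
	if (task->tk_flags & RPC_TASK_KILLED)
		return 0;		/* refused: caller must clean up */
	task->tk_action = call_start;	/* rewind the RPC state machine */
	return 1;			/* restart accepted */
}
```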
-
- 23 Jun 2010, 1 commit
-
-
Submitted by Trond Myklebust
If the attempt to read the calldir fails, then instead of storing the bytes that were read, we currently discard them. This leads to a garbage final result when, upon re-entry to the same routine, we read the remaining bytes. Fixes the regression in bugzilla #16213; see https://bugzilla.kernel.org/show_bug.cgi?id=16213
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: stable@kernel.org
-
- 11 Jun 2010, 3 commits
-
-
Submitted by Daniel Turull
This patch corrects a bug in the delay handling of pktgen, making sure the inter-packet interval is accurate.
Signed-off-by: Daniel Turull <daniel.turull@gmail.com>
Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
gen_kill_estimator() / gen_new_estimator() are not always called with the RTNL held. net/netfilter/xt_RATEEST.c is one user of these APIs that does not hold the RTNL, so random corruption can occur between "tc" and "iptables". Add a new fine-grained lock instead of trying to take the RTNL in netfilter.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
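A sketch of the locking idea; the lock and list names are assumptions, not the actual gen_estimator code:

```c
#include <linux/spinlock.h>
#include <linux/rculist.h>

/* Hypothetical fine-grained lock protecting the shared estimator list,
 * replacing the broken assumption that every caller holds the RTNL. */
static DEFINE_SPINLOCK(est_lock);
static LIST_HEAD(est_list);

static void est_register_sketch(struct list_head *entry)
{
	spin_lock_bh(&est_lock);	/* BH-safe: estimators also run from timers */
	list_add_rcu(entry, &est_list);
	spin_unlock_bh(&est_lock);
}
```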
-
Submitted by John Fastabend
Currently, the accelerated receive path for VLANs will drop packets if the real device is an inactive slave and is not one of the special pkts tested for in skb_bond_should_drop(). This behavior differs from the non-accelerated path and from pkts over a bonded vlan. For example, vlanx -> bond0 -> ethx will be dropped in the vlan path and not delivered to any packet handlers at all. However, bond0 -> vlanx -> ethx and bond0 -> ethx will be delivered to handlers that match the exact dev, because the VLAN path checks the real_dev, which is not a slave, and netif_receive_skb() doesn't drop frames but only delivers them to exact matches.

This patch adds a sk_buff flag that tags skbs that would previously have been dropped, and allows the skb to continue to netif_receive_skb(). Here we add logic to check for the deliver_no_wcard flag; if it is set, we deliver only to handlers that match exactly. This makes both paths above consistent and gives pkt handlers a way to identify skbs that come from inactive slaves. Without this patch, in some configurations skbs will be delivered to handlers with exact matches and in others be dropped outright in the vlan path. A sketch of the exact-match rule is shown below.

I have tested the following 4 configurations in failover modes and load balancing modes:
# bond0 -> ethx
# vlanx -> bond0 -> ethx
# bond0 -> vlanx -> ethx
# bond0 -> ethx | vlanx ->

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
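A sketch of the exact-match rule, assuming the deliver_no_wcard bit on struct sk_buff that the patch describes; the helper name and the reduced surrounding details are illustrative:

```c
#include <linux/netdevice.h>

/* Skbs tagged deliver_no_wcard came from an inactive bond slave and
 * may only reach handlers bound to exactly their device. */
static bool may_deliver_sketch(const struct sk_buff *skb,
			       const struct packet_type *ptype)
{
	if (skb->deliver_no_wcard)
		return ptype->dev == skb->dev;		/* exact matches only */
	return !ptype->dev || ptype->dev == skb->dev;	/* wildcard allowed */
}
```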
-
- 10 Jun 2010, 3 commits
-
-
Submitted by Eric Dumazet
In commit 1f8438a8 (icmp: Account for ICMP out errors), I made a typo on the IPv6 side, using ICMP6_MIB_OUTMSGS instead of ICMP6_MIB_OUTERRORS.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dan Carpenter
The extra ! character means that these conditions are always false.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Sjur Braendeland <sjur.brandeland@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
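An illustration of this bug class with a hypothetical condition (not the caif code itself): because `!` binds tighter than `<`, the test is always false:

```c
static int check_sketch(int ret)
{
	if (!ret < 0)	/* parsed as (!ret) < 0; !ret is 0 or 1, never < 0 */
		return ret;	/* dead branch: condition is always false */

	if (ret < 0)	/* the intended test */
		return ret;

	return 0;
}
```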
-
Submitted by Tim Gardner
BugLink: http://bugs.launchpad.net/bugs/591416
A number of network drivers (bridge, bonding, etc.) are not yet receive multi-queue enabled and use alloc_netdev(), so don't print a num_rx_queues imbalance warning in that case. Also, only print the warning once for those drivers that _are_ multi-queue enabled.
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
-
- 09 Jun 2010, 1 commit
-
-
Submitted by Johannes Berg
When we receive a deauthentication frame before having successfully associated, we neither print a message nor abort association. The former makes it hard to debug, while the latter later causes a warning in cfg80211 when, as will typically be the case, association times out. This warning was reported by many, e.g. in https://bugzilla.kernel.org/show_bug.cgi?id=15981, but I couldn't initially pinpoint it. I verified the fix by hacking hostapd to send a deauth frame instead of an association response.
Cc: stable@kernel.org
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Tested-by: Miles Lane <miles.lane@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 08 Jun 2010, 1 commit
-
-
Submitted by Holger Schurig
This makes "iw wlan0 dump survey" work again with mac80211-based drivers that support it, e.g. ath5k.
Signed-off-by: Holger Schurig <holgerschurig@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 07 Jun 2010, 2 commits
-
-
Submitted by Eric Dumazet
ipmr_rules_exit() and ip6mr_rules_exit() free a list of items but forget to properly remove those items from the list. The list head is not changed and still points to freed memory. This can trigger a fault later, when icmpv6_sk_exit() is called. The fix is to either reinit the list, or use list_del() to properly remove items from the list before freeing them.

Bugzilla report: https://bugzilla.kernel.org/show_bug.cgi?id=16120
Introduced by commit d1db275d (ipv6: ip6mr: support multiple tables) and commit f0ad0860 (ipv4: ipmr: support multiple tables)
Reported-by: Alex Zhavnerchik <alex.vizor@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
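A sketch of the safe teardown the fix describes, using a hypothetical table type:

```c
#include <linux/list.h>
#include <linux/slab.h>

struct mr_table_sketch {	/* hypothetical stand-in for the real table */
	struct list_head list;
};

static void rules_exit_sketch(struct list_head *tables)
{
	struct mr_table_sketch *mrt, *next;

	/* _safe variant: we delete entries while walking the list. */
	list_for_each_entry_safe(mrt, next, tables, list) {
		list_del(&mrt->list);	/* unlink first, so the head never
					 * points at freed memory */
		kfree(mrt);
	}
}
```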
-
Submitted by Eric Dumazet
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 Jun 2010, 6 commits
-
-
Submitted by Eric Dumazet
With mtu=9000, mld_newpack() uses order-2 GFP_ATOMIC allocations, which are very unreliable on machines where PAGE_SIZE=4K. Limit allocated skbs to at most one page (order-0 allocations).
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
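A sketch of the size clamp, with illustrative names (not the real mld_newpack() code):

```c
#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Cap the payload at one page so the GFP_ATOMIC allocation stays
 * order-0 even with a jumbo-frame MTU such as 9000. */
static struct sk_buff *mld_alloc_sketch(struct net_device *dev,
					int hlen, int tlen)
{
	unsigned int size = min_t(unsigned int, dev->mtu, PAGE_SIZE);

	return alloc_skb(size + hlen + tlen, GFP_ATOMIC);
}
```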
-
Submitted by Eric Dumazet
It's better to make the route lookup in the appropriate namespace.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
I believe a moderate SYN flood attack can corrupt the RFS flow table (rps_sock_flow_table), making RPS/RFS much less effective. Even in a normal situation, a server handling short-lived sessions suffers from bad steering for the first data packet of a session if another SYN packet is received for another session. We do the following in tcp_v4_rcv(): sock_rps_save_rxhash(sk, skb->rxhash); We should _not_ do this if sk is a LISTEN socket, as nearly every packet received on a LISTEN socket has a different rxhash than the previous one, so RPS_NO_CPU markers are spread all over rps_sock_flow_table. Also, it makes sense to protect sk->rxhash field changes with the socket lock (we can currently change it even if the user thread owns the lock and might be using rxhash). This patch moves sock_rps_save_rxhash() into a socket-locked section, and calls it only for non-LISTEN sockets.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
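A sketch of the guarded call; the caller is assumed to already hold the socket lock, per the changelog, and the helper name is illustrative:

```c
#include <net/sock.h>
#include <net/tcp_states.h>

/* Record the flow hash only for non-LISTEN sockets, so a flood of SYNs
 * to one listener cannot litter rps_sock_flow_table with stale entries. */
static void save_rxhash_sketch(struct sock *sk, const struct sk_buff *skb)
{
	if (sk->sk_state != TCP_LISTEN)
		sock_rps_save_rxhash(sk, skb->rxhash);
}
```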
-
Submitted by Florian Westphal
syncookies have defaulted to on since e994b7c9 (tcp: Don't make syn cookies initial setting depend on CONFIG_SYSCTL).
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Steffen Klassert
xfrm triggers a warning if dst_pop() drops a refcount on a noref dst. This patch changes dst_pop() to skb_dst_pop(), which drops the refcnt only on a refcounted dst. Also, we don't clone the child dst_entry, so it is not refcounted and we can use skb_dst_set_noref() in xfrm_output_one().
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Johannes Berg
Processing an association response can take a bit of time while we set up the hardware etc. During that time, the AP might already send a blockack request. If this happens very quickly on a fairly slow machine, we can end up processing the blockack request before the association processing has finished. Since the blockack processing cannot sleep right now, we also cannot make it wait in the driver. As a result, sometimes on slow machines the iwlagn driver gets totally confused, and no traffic can pass when the aggregation setup was done before the assoc setup completed.

I'm working on a proper fix for this, which involves queuing all blockack-category action frames from a work struct, and also allowing the ampdu_action driver callback to sleep, which will generally clean up the code and make things easier. However, that is a very involved and complex change. To fix the problem at hand in a way that can also be backported to stable, I've come up with this patch. Here, I simply process all aggregation action frames from the managed interface's skb queue, which means their processing is serialized with processing the association response, thereby fixing the problem.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Cc: stable@kernel.org
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 03 Jun 2010, 1 commit
-
-
Submitted by Changli Gao
access skb->data safely: we should use skb_header_pointer() and skb_store_bits() to access skb->data, to handle small or non-linear skbs.
Signed-off-by: Changli Gao <xiaosuo@gmail.com>
---
 net/sched/act_pedit.c | 24 ++++++++++++++----
 1 file changed, 14 insertions(+), 10 deletions(-)
Signed-off-by: David S. Miller <davem@davemloft.net>
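A sketch of the safe read-modify-write pattern using these two helpers; the function name, offsets, and the bit manipulation are illustrative:

```c
#include <linux/skbuff.h>

static int pedit_rw_sketch(struct sk_buff *skb, int off, u32 mask, u32 val)
{
	u32 buf;
	/* For non-linear skbs, skb_header_pointer() copies into the local
	 * buffer; otherwise it returns a direct pointer into the skb. */
	u32 *p = skb_header_pointer(skb, off, sizeof(buf), &buf);

	if (!p)
		return -1;		/* skb too short: bail out */

	buf = (*p & ~mask) | val;	/* rewrite the selected bits */

	/* skb_store_bits() writes back even into paged/non-linear data. */
	if (skb_store_bits(skb, off, &buf, sizeof(buf)) < 0)
		return -1;
	return 0;
}
```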
-
- 02 Jun 2010, 8 commits
-
-
Submitted by Changli Gao
use skb_header_pointer() to dereference data safely: the original skb->data dereference isn't safe, as there isn't any skb->len or skb_is_nonlinear() check. skb_header_pointer() is used instead in this patch. And when the skb isn't long enough, we terminate the function u32_classify() immediately with -1.
Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Daniele Lacamera
For large values of rtt, the 2^rho operation may overflow u32. Clamp the increment to at most 2^16.
Signed-off-by: Daniele Lacamera <root@danielinux.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
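A sketch of the clamped computation, with illustrative names rather than the exact TCP Hybla code:

```c
#include <linux/kernel.h>

/* 1 << rho overflows a u32 once rho reaches 32, so cap the exponent
 * at 16 and the increment at 2^16. */
static u32 hybla_increment_sketch(u32 rho)
{
	return 1U << min_t(u32, rho, 16);
}
```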
-
Submitted by Changli Gao
fix the wrong checksum when addr isn't in old_addr/mask: for TCP and UDP packets, when addr isn't in old_addr/mask we don't do SNAT or DNAT, and we should not update the layer-4 checksum.
Signed-off-by: Changli Gao <xiaosuo@gmail.com>
---
 net/sched/act_nat.c | 4 ++++
 1 file changed, 4 insertions(+)
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by John Fastabend
If a skb is received on an inactive bond that does not meet the special cases checked for by skb_bond_should_drop(), it should only be delivered to exact matches, as the comment in netif_receive_skb() says. However, because null_or_bond could also be null, this is not always true. This patch renames null_or_bond to orig_or_bond and initializes it to orig_dev. This keeps the intent of null_or_bond, which is to pass frames received on VLAN interfaces stacked on bonding interfaces, without invalidating the statement for null_or_orig.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by John Fastabend
The vlan device should not copy the slave or master flags from the real device. It is not in the bond until it is added, nor is it a master.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Packets going through __xfrm_route_forward() have a non-refcounted dst entry, since we enabled a noref forwarding path. xfrm_lookup() might incorrectly release this dst entry. It's a bit late to make invasive changes in xfrm_lookup(), so let's force a refcount in this path.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Johannes Berg
The dialog token allocator has apparently been broken since b83f4e15 ("mac80211: fix deadlock in sta->lock") because it got moved out from under the spinlock. Fix it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Submitted by Johannes Berg
Daniel reported that the paged RX changes had broken blockack request frame processing, due to using data that wasn't really part of the skb data. Fix this using skb_copy_bits() for the needed data. As a side effect, this adds a check against processing too-short frames, which the previous code could do.
Reported-by: Daniel Halperin <dhalperi@cs.washington.edu>
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Daniel Halperin <dhalperi@cs.washington.edu>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 01 Jun 2010, 2 commits
-
-
Submitted by Joe Perches
Commit c720c7e8 missed these.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Correct sk_forward_alloc handling for the error queue would need either a backlog of frames that the softirq handler could not deliver because the socket is owned by the user thread, or an extension of backlog processing to handle both normal and error packets. Another possibility is to not use memory charging for the error queue at all; that is what I implemented in this patch. Note: this reverts commit 29030374 (net: fix sk_forward_alloc corruptions), since we don't need to lock the socket anymore.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 May 2010, 2 commits
-
-
Submitted by Eric Dumazet
Commit f3c5c1bf (netfilter: xtables: make ip_tables reentrant) introduced a performance regression, because the stackptr array is shared by all cpus, adding cache-line ping-pong (16 cpus share a 64-byte cache line). Fix this using alloc_percpu().
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
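A sketch of the per-cpu conversion, using a hypothetical pointer rather than the exact xtables layout:

```c
#include <linux/percpu.h>
#include <linux/errno.h>

/* Before: one array indexed by cpu, entries sharing cache lines.
 * After: alloc_percpu() gives each cpu its own cache-line-aligned
 * copy, eliminating the ping-pong between cpus. */
static unsigned int __percpu *stackptr_sketch;

static int stackptr_alloc_sketch(void)
{
	stackptr_sketch = alloc_percpu(unsigned int);
	return stackptr_sketch ? 0 : -ENOMEM;
}

static unsigned int *stackptr_get_sketch(void)
{
	/* Caller is assumed to run with preemption disabled. */
	return this_cpu_ptr(stackptr_sketch);
}
```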
-
Submitted by Xiaotian Feng
In xt_register_table(), xt_jumpstack_alloc() is called first, and xt_replace_table() is used later. But xt_replace_table() calls xt_jumpstack_alloc() again, so the memory allocated by the earlier xt_jumpstack_alloc() is leaked. We can simply remove the earlier call, because there aren't any users of newinfo between xt_jumpstack_alloc() and xt_replace_table().
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Cc: Patrick McHardy <kaber@trash.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jan Engelhardt <jengelh@medozas.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
-
- 29 May 2010, 3 commits
-
-
Submitted by Eric Dumazet
As David found out, sock_queue_err_skb() should be called with the socket lock held, or we risk sk_forward_alloc corruption, since we use non-atomic operations to update this field. This patch adds a bh_lock_sock()/bh_unlock_sock() pair in three spots (BH already disabled):
1) skb_tstamp_tx()
2) Before calling ip_icmp_error(), in __udp4_lib_err()
3) Before calling ipv6_icmp_error(), in __udp6_lib_err()
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
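A sketch of the pattern added at those call sites; the wrapper function is illustrative, only the locking pair and the queueing call come from the changelog:

```c
#include <net/sock.h>

static void queue_err_skb_sketch(struct sock *sk, struct sk_buff *skb)
{
	/* BH is already disabled here, so the bh_lock variant suffices;
	 * the lock serializes the non-atomic sk_forward_alloc update
	 * done inside sock_queue_err_skb(). */
	bh_lock_sock(sk);
	if (sock_queue_err_skb(sk, skb))
		kfree_skb(skb);		/* queueing failed: drop the skb */
	bh_unlock_sock(sk);
}
```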
-
Submitted by Rémi Denis-Courmont
The accept()'d socket needs to be unhashed while the (listen()'ing) socket lock is held. This fixes a race condition that could lead to an OOPS.
Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dan Carpenter
There was a spin_unlock missing on the error path. The spin_lock was tucked in with the declarations, so it was hard to spot. I added a new line.
Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Sjur Brændeland <sjurbren@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
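An illustration of this bug class with a hypothetical function (not the caif code): every exit path taken after the lock is acquired must release it:

```c
#include <linux/spinlock.h>
#include <linux/errno.h>

static DEFINE_SPINLOCK(lock_sketch);

static int do_work_sketch(int arg)
{
	int err = 0;

	spin_lock(&lock_sketch);	/* on its own line, easy to spot */

	if (arg < 0) {
		err = -EINVAL;
		goto out;	/* error path must still unlock */
	}
	/* ... do the work under the lock ... */
out:
	spin_unlock(&lock_sketch);
	return err;
}
```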
-