- 29 September 2011, 25 commits
-
Committed by Paul E. McKenney
The rcu_dereference_bh_protected() and rcu_dereference_sched_protected() macros are synonyms for rcu_dereference_protected() and are not used anywhere in mainline. This commit therefore removes them.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
Add documentation for rcu_dereference_bh_check(), rcu_dereference_sched_check(), srcu_dereference_check(), and rcu_dereference_index_check().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
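As a hedged illustration (not taken from the patch), a typical use of one of the newly documented macros might look like the following, with the pointer and lock names purely hypothetical:

    /* hypothetical RCU-protected pointer and its update-side lock */
    static DEFINE_SPINLOCK(gp_lock);
    static struct foo __rcu *gp;

    static struct foo *get_gp(void)
    {
            /* legal either inside rcu_read_lock_bh() or while holding gp_lock */
            return rcu_dereference_bh_check(gp, lockdep_is_held(&gp_lock));
    }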
-
Committed by Michal Hocko
Since ca5ecddf (rcu: define __rcu address space modifier for sparse), rcu_dereference_check() uses rcu_read_lock_held() as part of its condition automatically. Therefore, callers of rcu_dereference_check() no longer need to pass rcu_read_lock_held() to rcu_dereference_check().
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
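A minimal before/after sketch of the resulting call-site simplification (pointer and lock names are illustrative):

    /* before: callers spelled out rcu_read_lock_held() themselves */
    p = rcu_dereference_check(gp,
                              rcu_read_lock_held() || lockdep_is_held(&gp_lock));

    /* after: rcu_read_lock_held() is already part of the macro's own check */
    p = rcu_dereference_check(gp, lockdep_is_held(&gp_lock));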
-
Committed by Paul E. McKenney
There is often a delay between the time that a CPU passes through a quiescent state and the time that this quiescent state is reported to the RCU core. It is quite possible that the grace period ended before the quiescent state could be reported, for example, some other CPU might have deduced that this CPU passed through dyntick-idle mode. It is critically important that a quiescent state be counted only against the grace period that was in effect at the time that the quiescent state was detected. Previously, this was handled by recording the number of the last grace period to complete when passing through a quiescent state. The RCU core then checks this number against the current value, and rejects the quiescent state if there is a mismatch. However, one additional possibility must be accounted for, namely that the quiescent state was recorded after the prior grace period completed but before the current grace period started. In this case, the RCU core must reject the quiescent state, but the recorded number will match. This is handled when the CPU becomes aware of a new grace period -- at that point, it invalidates any prior quiescent state. This works, but is a bit indirect. The new approach records the current grace period, and the RCU core checks to see (1) that this is still the current grace period and (2) that this grace period has not yet ended. This approach simplifies reasoning about correctness, and this commit changes over to this new approach.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
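A rough sketch of the bookkeeping described above; the field and structure names are illustrative placeholders, and the real rcutree.c logic is considerably more involved:

    /* when a quiescent state is detected, tag it with the grace period
     * that was current at detection time */
    rdp->passed_quiesce_gpnum = rdp->gpnum;

    /* when the quiescent state is later reported to the RCU core, accept it
     * only if (1) that grace period is still current and (2) it has not
     * already ended */
    if (rdp->passed_quiesce_gpnum != rnp->gpnum ||
        rnp->completed == rnp->gpnum)
            return;         /* stale or untimely: discard this quiescent state */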
-
Committed by Paul E. McKenney
Add trace events to record grace-period start and end, quiescent states, CPUs noticing grace-period start and end, grace-period initialization, call_rcu() invocation, tasks blocking in RCU read-side critical sections, tasks exiting those same critical sections, force_quiescent_state() detection of dyntick-idle and offline CPUs, CPUs entering and leaving dyntick-idle mode (except from NMIs), CPUs coming online and going offline, and CPUs being kicked for staying in dyntick-idle mode for too long (as in many weeks, even on 32-bit systems).
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

rcu: Add the rcu flavor to callback trace events
The earlier trace events for registering RCU callbacks and for invoking them did not include the RCU flavor (rcu_bh, rcu_preempt, or rcu_sched). This commit adds the RCU flavor to those trace events.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
This patch #ifdefs TINY_RCU kthreads out of the kernel unless RCU_BOOST=y, thus eliminating context-switch overhead if RCU priority boosting has not been configured.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
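A hedged sketch of the kind of conditional compilation this implies; the identifiers are placeholders rather than the patch's actual symbols:

    #ifdef CONFIG_RCU_BOOST
    static struct task_struct *rcu_kthread_task;

    static void invoke_rcu_kthread(void)
    {
            wake_up_process(rcu_kthread_task);   /* boosting: hand work to the kthread */
    }
    #else
    static void invoke_rcu_kthread(void)
    {
            raise_softirq(RCU_SOFTIRQ);          /* no boosting: no kthread, no context switch */
    }
    #endif /* CONFIG_RCU_BOOST */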
-
Committed by Paul E. McKenney
Add event-trace markers to TREE_RCU kthreads to allow including these kthreads' CPU time in the utilization calculations.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
Andi Kleen noticed that one of the RCU_BOOST data declarations was out of sync with the definition. Move the declarations so that the compiler can do the checking in the future.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
We now have kthreads only for flavors of RCU that support boosting, so update the now-misleading comments accordingly.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
Add a string to the rcu_batch_start() and rcu_batch_end() trace messages that indicates the RCU type ("rcu_sched", "rcu_bh", or "rcu_preempt"). The trace messages for the actual invocations themselves are not marked, as it should be clear from the rcu_batch_start() and rcu_batch_end() events before and after.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
In order to allow event tracing to distinguish between flavors of RCU, we need those names in the relevant RCU data structures. TINY_RCU has avoided them for memory-footprint reasons, so add them only if CONFIG_RCU_TRACE=y.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
This commit adds the trace_rcu_utilization() marker that is to be used to allow postprocessing scripts to compute RCU's CPU utilization, give or take event-trace overhead. Note that we do not include RCU's dyntick-idle interface because event tracing requires RCU protection, which is not available in dyntick-idle mode.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
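A hedged example of how such a marker is typically placed around a region whose CPU time should be attributed to RCU; the strings and the function are illustrative:

    static void rcu_do_core_work(void)
    {
            trace_rcu_utilization("Start RCU core");
            /* ... RCU core processing measured by the postprocessing scripts ... */
            trace_rcu_utilization("End RCU core");
    }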
-
Committed by Paul E. McKenney
There was recently some controversy about the overhead of invoking RCU callbacks. Add TRACE_EVENT()s to obtain fine-grained timings for the start and stop of a batch of callbacks and also for each callback invoked.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
The rcu_torture_boost() cleanup code destroyed debug-objects state before waiting for the last RCU callback to be invoked, resulting in rare but very real debug-objects warnings. Move the destruction to after the waiting to fix this problem.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
This commit eliminates the possibility of running TREE_PREEMPT_RCU when SMP=n and of running TINY_RCU when PREEMPT=y. People who really want these combinations can hand-edit init/Kconfig, but eliminating them as choices for production systems reduces the amount of testing required. It will also allow cutting out a few #ifdefs. Note that running TREE_RCU and TINY_RCU on single-CPU systems using SMP-built kernels is still supported.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
It has long been the case that the architecture must call nmi_enter() and nmi_exit() rather than irq_enter() and irq_exit() in order to permit RCU read-side critical sections in NMIs. Catch the documentation up with reality.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
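An illustrative arch-side sketch of the documented requirement; the handler name is hypothetical:

    void arch_handle_nmi(struct pt_regs *regs)      /* hypothetical arch entry hook */
    {
            nmi_enter();                            /* not irq_enter() */

            rcu_read_lock();
            /* ... inspect RCU-protected state from NMI context ... */
            rcu_read_unlock();

            nmi_exit();                             /* not irq_exit() */
    }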
-
Committed by Paul E. McKenney
Now that the RCU API contains synchronize_rcu_bh(), synchronize_sched(), call_rcu_sched(), and rcu_bh_expedited()... Make rcutorture test synchronize_rcu_bh(), getting rid of the old rcu_bh_torture_synchronize() workaround. Similarly, make rcutorture test synchronize_sched(), getting rid of the old sched_torture_synchronize() workaround. Make rcutorture test call_rcu_sched() instead of wrappering synchronize_sched(). Also add testing of rcu_bh_expedited().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
Pull the code that waits for an RCU grace period into a single function, which is then called by synchronize_rcu() and friends in the case of TREE_RCU and TREE_PREEMPT_RCU, and from rcu_barrier() and friends in the case of TINY_RCU and TINY_PREEMPT_RCU.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Andi Kleen
rcutree.c defines rcu_cpu_kthread_cpu as int, not unsigned int, so the extern has to follow that.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
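A minimal illustration of the kind of mismatch being fixed, reduced here to a plain variable for clarity:

    /* rcutree.c: the definition */
    int rcu_cpu_kthread_cpu;

    /* header, before the fix: the extern disagrees with the definition */
    extern unsigned int rcu_cpu_kthread_cpu;

    /* header, after the fix: the types match */
    extern int rcu_cpu_kthread_cpu;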
-
Committed by Paul E. McKenney
Update rcutorture documentation to account for boosting, new types of RCU torture testing that have been added over the past few years, and the memory-barrier testing that was added an embarrassingly long time ago.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Paul E. McKenney
Take a first step towards untangling Linux kernel header files by placing the struct rcu_head definition into include/linux/types.h and including include/linux/types.h in include/linux/rcupdate.h where struct rcu_head used to be defined. The actual inclusion point for include/linux/types.h is with the rest of the #include directives rather than at the point where struct rcu_head used to be defined, as suggested by Mathieu Desnoyers. Once this is in place, then header files that need only rcu_head can include types.h rather than rcupdate.h.
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
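For reference (shown here for illustration rather than quoted from the patch), the structure being relocated has long had this small, self-contained layout, which is what makes the move to types.h cheap:

    struct rcu_head {
            struct rcu_head *next;
            void (*func)(struct rcu_head *head);
    };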
-
Committed by Paul E. McKenney
Long ago, using TREE_RCU with PREEMPT would result in "scheduling while atomic" diagnostics if you blocked in an RCU read-side critical section. However, PREEMPT now implies TREE_PREEMPT_RCU, which defeats this diagnostic. This commit therefore adds a replacement diagnostic based on PROVE_RCU. Because rcu_lockdep_assert() and lockdep_rcu_dereference() are now being used for things that have nothing to do with rcu_dereference(), rename lockdep_rcu_dereference() to lockdep_rcu_suspicious() and add a third argument that is a string indicating what is suspicious. This third argument is passed in from a new third argument to rcu_lockdep_assert(). Update all calls to rcu_lockdep_assert() to add an informative third argument. Also, add a pair of rcu_lockdep_assert() calls from within rcu_note_context_switch(), one complaining if a context switch occurs in an RCU-bh read-side critical section and another complaining if a context switch occurs in an RCU-sched read-side critical section. These are present only if the PROVE_RCU kernel parameter is enabled. Finally, fix some checkpatch whitespace complaints in lockdep.c. Again, you must enable PROVE_RCU to see these new diagnostics. But you are enabling PROVE_RCU to check out new RCU uses in any case, aren't you?
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
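A hedged example of the renamed helper and its new third argument; the message string is illustrative:

    /* emitted by the RCU/lockdep machinery when a suspicious use is detected */
    lockdep_rcu_suspicious(__FILE__, __LINE__,
                           "suspicious rcu_dereference_check() usage");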
-
Committed by Paul E. McKenney
Call out the RCU_TRACE information that is provided only in kernels built with RCU_BOOST.
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
Committed by Shaohua Li
There are a number of cases where RCU can find additional work for the per-CPU kthread within the context of that per-CPU kthread. In such cases, the per-CPU kthread is already running, so attempting to wake itself up does nothing except waste CPU cycles. This commit therefore checks to see if it is in the per-CPU kthread context, omitting the wakeup in this case.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
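A hedged sketch of the check being added; the per-CPU variable and function names are illustrative rather than the patch's own:

    static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);

    static void invoke_rcu_cpu_kthread(void)
    {
            struct task_struct *t = __this_cpu_read(rcu_cpu_kthread_task);

            if (t == current)
                    return;         /* already in the kthread: the wakeup would be wasted */
            if (t)
                    wake_up_process(t);
    }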
-
Committed by Eric Dumazet
Commit a26ac245 (move TREE_RCU from softirq to kthread) added per-CPU kthreads. However, kthread creation uses kthread_create(), which can put the kthread's stack and task struct on the wrong NUMA node. Therefore, use kthread_create_on_node() instead of kthread_create() so that the stacks and task structs are placed on the correct NUMA node. A similar change was carried out in commit 94dcf29a (kthread: use kthread_create_on_node()). Also change rcutorture's priority-boost-test kthread creation.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Tejun Heo <tj@kernel.org>
CC: Rusty Russell <rusty@rustcorp.com.au>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Andi Kleen <ak@linux.intel.com>
CC: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
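A hedged sketch of the substitution described above; the thread function and name format are illustrative:

    /* before */
    t = kthread_create(rcu_cpu_kthread, (void *)(long)cpu, "rcuc/%d", cpu);

    /* after: place the stack and task_struct on the CPU's home NUMA node */
    t = kthread_create_on_node(rcu_cpu_kthread, (void *)(long)cpu,
                               cpu_to_node(cpu), "rcuc/%d", cpu);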
-
- 28 September 2011, 4 commits
-
Committed by Linus Torvalds
-
Committed by Linus Torvalds (git://github.com/tiwai/sound)
* 'for-linus' of git://github.com/tiwai/sound:
  ASoC: ssm2602: Re-enable oscillator after suspend
  ALSA: usb-audio: Check for possible chip NULL pointer before clearing probing flag
  ALSA: hda/realtek - Don't detect LO jack when identical with HP
  ALSA: hda/realtek - Avoid bogus HP-pin assignment
  ALSA: HDA: No power nids on 92HD93
  ASoC: omap-mcbsp: Do not attempt to change DAI sysclk if stream is active
-
Committed by Linus Torvalds (git://github.com/rjwysocki/linux-pm)
* 'pm-fixes' of git://github.com/rjwysocki/linux-pm:
  PM / Clocks: Do not acquire a mutex under a spinlock
-
Committed by Takashi Iwai
-
- 27 September 2011, 11 commits
-
Committed by Linus Torvalds
That flag no longer makes sense, since we don't look up automount points as eagerly any more. Additionally, it turns out that the NO_AUTOMOUNT handling was buggy to begin with: it would avoid automounting even for cases where we really *needed* to do the automount handling, and could return ENOENT for autofs entries that hadn't been instantiated yet. With our new non-eager automount semantics, one discussion has been about adding an AT_AUTOMOUNT flag to vfs_fstatat (and thus the newfstatat() and fstatat64() system calls), but it's probably not worth it: you can always force at least directory automounting by simply adding the final '/' to the filename, which works for *all* of the stat family system calls, old and new. So AT_NO_AUTOMOUNT (and thus LOOKUP_NO_AUTOMOUNT) really were just a result of our bad default behavior.
Acked-by: Ian Kent <raven@themaw.net>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
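A small user-space illustration of the trailing-'/' trick mentioned above; the autofs-managed path is hypothetical:

    #include <sys/stat.h>
    #include <stdio.h>

    int main(void)
    {
            struct stat st;

            /* without the trailing '/', stat() may see the un-triggered mountpoint */
            stat("/net/server/export", &st);

            /* the trailing '/' forces directory automounting for the stat family */
            if (stat("/net/server/export/", &st) == 0)
                    printf("automounted, inode %lu\n", (unsigned long)st.st_ino);
            return 0;
    }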
-
Committed by Lars-Peter Clausen
Currently the internal oscillator is powered down when entering BIAS_OFF state, but not re-enabled when going back to BIAS_STANDBY. As a result the CODEC will stop working after suspend if the internal oscillator is used to generate the sysclock signal. This patch fixes it by clearing the appropriate bit in the power down register when the CODEC is re-enabled.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: stable@kernel.org
-
Committed by Trond Myklebust
The consensus seems to be that system calls such as stat() etc should not trigger an automount. Neither should the l* versions. This patch therefore adds a LOOKUP_AUTOMOUNT flag to tag those lookups that _should_ trigger an automount on the last path element.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
[ Edited to leave out the cases that are already covered by LOOKUP_OPEN, LOOKUP_DIRECTORY and LOOKUP_CREATE - all of which also fundamentally force automounting for their own reasons - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Linus Torvalds
Since we've now turned around and made LOOKUP_FOLLOW *not* force an automount, we want to add the ability to force an automount event on lookup even if we don't happen to have one of the other flags that force it implicitly (LOOKUP_OPEN, LOOKUP_DIRECTORY, LOOKUP_PARENT..) Most cases will never want to use this, since you'd normally want to delay automounting as long as possible, which usually implies LOOKUP_OPEN (when we open a file or directory, we really cannot avoid the automount any more). But Trond argued sufficiently forcefully that at a minimum bind mounting a file and quotactl will want to force the automount lookup. Some other cases (like nfs_follow_remote_path()) could use it too, although LOOKUP_DIRECTORY would work there as well. This commit just adds the flag and logic, no users yet, though. It also doesn't actually touch the LOOKUP_NO_AUTOMOUNT flag that is related, and was made irrelevant by the same change that made us not follow on LOOKUP_FOLLOW.
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: David Howells <dhowells@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Greg KH <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Linus Torvalds (git://github.com/kgene/linux-samsung)
* 'samsung-fixes-3' of git://github.com/kgene/linux-samsung:
  ARM: EXYNOS4: Rename sclk_cam clocks for FIMC driver
  ARM: S5PV210: Rename sclk_cam clocks for FIMC media driver
  ARM: S5P: fix incorrect loop iterator usage on gpio-interrupt
  ARM: S3C2443: Fix bit-reset in setrate of clk_armdiv
-
Committed by Sylwester Nawrocki
The sclk_cam clocks are now controlled by the top level FIMC media device driver bound to the "s5p-fimc-md" platform device. Rename the sclk_cam clocks so they are accessible by the corresponding driver.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
-
Committed by Sylwester Nawrocki
The sclk_cam clocks are now controlled by the top level FIMC media device driver bound to the "s5p-fimc-md" platform device. Rename the sclk_cam clocks so they are accessible by the corresponding driver.
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
-
Committed by Linus Torvalds (git://github.com/groeck/linux)
* 'hwmon-for-linus' of git://github.com/groeck/linux:
  hwmon: (coretemp) remove struct platform_data * parameter from create_core_data()
  hwmon: (coretemp) constify static data
  hwmon: (coretemp) don't use kernel assigned CPU number as platform device ID
  hwmon: (ds620) Fix handling of negative temperatures
  hwmon: (w83791d) rename prototype parameter from 'register' to 'reg'
  hwmon: (coretemp) Don't use threshold registers for tempX_max
  hwmon: (coretemp) Let the user force TjMax
  hwmon: (coretemp) Drop duplicate function get_pkg_tjmax
-
Committed by Linus Torvalds (git://github.com/avikivity/kvm)
* 'kvm-updates/3.1' of git://github.com/avikivity/kvm:
  KVM: x86 emulator: fix Src2CL decode
  KVM: MMU: fix incorrect return of spte
-
Committed by Linus Torvalds (http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm)
* 'fixes' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm:
  ARM: 7099/1: futex: preserve oldval in SMP __futex_atomic_op
  ARM: dma-mapping: free allocated page if unable to map
  ARM: fix vmlinux.lds.S discarding sections
  ARM: nommu: fix warning with checksyscalls.sh
  ARM: 7091/1: errata: D-cache line maintenance operation by MVA may not succeed
-
Committed by Rafael J. Wysocki
Commit b7ab83ed (PM: Use spinlock instead of mutex in clock management functions) introduced a regression causing clocks_mutex to be acquired under a spinlock. This happens because pm_clk_suspend() and pm_clk_resume() call pm_clk_acquire() under pcd->lock, but pm_clk_acquire() executes clk_get() which causes clocks_mutex to be acquired. Similarly, __pm_clk_remove(), executed under pcd->lock, calls clk_put(), which also causes clocks_mutex to be acquired. To fix those problems make pm_clk_add() call pm_clk_acquire(), so that pm_clk_suspend() and pm_clk_resume() don't have to do that. Change pm_clk_remove() and pm_clk_destroy() to separate modifications of the pcd->clock_list list from the actual removal of PM clock entry objects done by __pm_clk_remove().
Reported-and-tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
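A condensed, hedged sketch of the lock nesting the commit removes; the structure and field names are illustrative:

    /* before the fix: pm_clk_suspend()/pm_clk_resume() effectively did this */
    spin_lock_irqsave(&pcd->lock, flags);
    ce->clk = clk_get(dev, ce->con_id);     /* via pm_clk_acquire(); clk_get() takes clocks_mutex,
                                             * i.e. it may sleep under a spinlock */
    spin_unlock_irqrestore(&pcd->lock, flags);

    /* after the fix, clk_get() runs once in pm_clk_add(), outside pcd->lock */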
-