- 05 Jun 2014, 1 commit
-
-
Submitted by Christoph Lameter

__get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address for the instance of the percpu variable of the current processor based on an offset. Other use cases are for storing and retrieving data from the current processor's percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

    #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or global register on other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables.

This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and fewer registers are used when code is generated.

At the end of the patch set all uses of __get_cpu_var have been removed, so the macro is removed too.

The patch set includes passes over all arches as well. Once these operations are used throughout, specialized macros can be defined in non-x86 arches as well in order to optimize per cpu access by, for example, using a global register that may be set to the per cpu base.

Transformations done to __get_cpu_var():

1. Determine the address of the percpu instance of the current processor.

    DEFINE_PER_CPU(int, y);
    int *x = &__get_cpu_var(y);

   Converts to

    int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

    DEFINE_PER_CPU(int, y[20]);
    int *x = __get_cpu_var(y);

   Converts to

    int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu variable.

    DEFINE_PER_CPU(int, y);
    int x = __get_cpu_var(y);

   Converts to

    int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct.

    DEFINE_PER_CPU(struct mystruct, y);
    struct mystruct x = __get_cpu_var(y);

   Converts to

    memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable.

    DEFINE_PER_CPU(int, y)
    __get_cpu_var(y) = x;

   Converts to

    __this_cpu_write(y, x);

6. Increment/decrement etc. of a per cpu variable.

    DEFINE_PER_CPU(int, y);
    __get_cpu_var(y)++

   Converts to

    __this_cpu_inc(y)

Signed-off-by: Christoph Lameter <cl@linux.com>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org> [compilation only]
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
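For reference, a minimal sketch of what such a conversion looks like in practice. The percpu variable demo_count and the two helper functions are hypothetical, used only to illustrate the transformations listed above:

    #include <linux/percpu.h>

    DEFINE_PER_CPU(int, demo_count);    /* hypothetical percpu counter */

    /* Old style: every access goes through an address calculation. */
    static void demo_old(void)
    {
        int *p = &__get_cpu_var(demo_count);    /* transformation #1 */
        __get_cpu_var(demo_count) = 5;          /* transformation #5 */
        __get_cpu_var(demo_count)++;            /* transformation #6 */
        (void)p;
    }

    /* New style: explicit this_cpu_ptr() or offset-based this_cpu ops. */
    static void demo_new(void)
    {
        int *p = this_cpu_ptr(&demo_count);
        __this_cpu_write(demo_count, 5);
        __this_cpu_inc(demo_count);
        (void)p;
    }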
-
- 15 Jul 2013, 1 commit
-
-
Submitted by Paul Gortmaker

The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless.

This removes all the arch/sh uses of the __cpuinit macros from all C files. Currently sh does not have any __CPUINIT used in assembly files.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 08 Apr 2013, 1 commit
-
-
Submitted by Thomas Gleixner

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/20130321215235.216323644@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 01 Jun 2012, 1 commit
-
-
Submitted by Anton Vorontsov

Checking for process->mm is not enough, because the process' main thread may exit or detach its mm via use_mm() while other threads still have a valid mm.

To fix this we would need to use find_lock_task_mm(), which would walk up all threads and return an appropriate task (with task lock held). clear_tasks_mm_cpumask() has the issue fixed, so let's use it.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
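A rough sketch of the pattern this points at in a CPU-offline path; the surrounding function name is illustrative, not the exact SH code:

    #include <linux/sched.h>
    #include <linux/cpumask.h>

    /* Called while tearing down a CPU (illustrative wrapper). */
    static void demo_cpu_offline_mm_cleanup(int cpu)
    {
        /*
         * Testing each task's ->mm directly misses threads whose group
         * leader has exited or dropped its mm via use_mm().
         * clear_tasks_mm_cpumask() walks tasks via find_lock_task_mm()
         * and clears the dying CPU from every mm's cpumask safely.
         */
        clear_tasks_mm_cpumask(cpu);
    }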
-
- 26 Apr 2012, 2 commits
-
-
Submitted by Thomas Gleixner

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/20120420124557.855203626@linutronix.de
-
Submitted by Thomas Gleixner

Preparatory patch to make the idle thread allocation for secondary CPUs generic.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20120420124556.964170564@linutronix.de
-
- 30 Mar 2012, 1 commit
-
-
Submitted by Paul Mundt

Quite a bit of fallout all over the place, nothing terribly exciting.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 29 Mar 2012, 1 commit
-
-
Submitted by David Howells

Disintegrate asm/system.h for SH.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-sh@vger.kernel.org
-
- 24 Feb 2012, 1 commit
-
-
Submitted by Rusty Russell

This has been obsolescent for a while; time for the final push.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 27 Jul 2011, 1 commit
-
-
Submitted by Arun Sharma

This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 Apr 2011, 1 commit
-
-
Submitted by Peter Zijlstra

For future rework of try_to_wake_up() we'd like to push part of that function onto the CPU the task is actually going to run on. In order to do so we need a generic callback from the existing scheduler IPI. This patch introduces such a generic callback, scheduler_ipi(), and implements it as a NOP.

BenH notes: PowerPC might use this IPI on offline CPUs under rare conditions!

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110405152728.744338123@chello.nl
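On the arch side the change is tiny; a sketch of the shape it takes, with an illustrative handler name:

    #include <linux/sched.h>

    /* Reschedule-IPI handler on the receiving CPU (illustrative name). */
    static void demo_ipi_resched(void)
    {
        scheduler_ipi();    /* generic hook; currently a NOP */
    }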
-
- 29 Apr 2010, 1 commit
-
-
Submitted by Matt Fleming

arch/sh/kernel/smp.c:164: error: conflicting types for 'native_cpu_disable'
/home/matt/src/kernels/sh-2.6/arch/sh/include/asm/smp.h:48: error: previous declaration of 'native_cpu_disable' was here

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 26 Apr 2010, 5 commits
-
-
Submitted by Paul Mundt

This adds preliminary support for CPU hotplug for SH SMP systems.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

This provides a cache of the secondary CPUs' idle loop for the cases where hotplug simply enters a low power state instead of resetting or powering off the core.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

smp_store_cpu_info() is presently flagged as __init, but is called by start_secondary(), which is __cpuinit; fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

This provides percpu CPU states in preparation for CPU hotplug and the associated notifier chains.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

This converts from cpu_set() for the online map to set_cpu_online(). The two online map modifiers were the last remaining manual map manipulation bits; with this in place, everything now goes through cpumask.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
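The conversion amounts to something like the following; the wrapper functions, and the exact form of the old calls shown in the comments, are illustrative:

    #include <linux/cpumask.h>

    static void demo_mark_online(unsigned int cpu)
    {
        /* was roughly: cpu_set(cpu, cpu_online_map); */
        set_cpu_online(cpu, true);
    }

    static void demo_mark_offline(unsigned int cpu)
    {
        /* was roughly: cpu_clear(cpu, cpu_online_map); */
        set_cpu_online(cpu, false);
    }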
-
- 21 Apr 2010, 1 commit
-
-
Submitted by Paul Mundt

This cribs the MIPS plat_smp_ops approach for wrapping up the platform ops. This will allow for mixing and matching different ops on the same platform in the future.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
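Roughly, such an ops table is a struct of function pointers that each platform fills in; the field list below is an illustrative subset, not the exact SH definition:

    struct demo_plat_smp_ops {
        void (*smp_setup)(void);
        void (*prepare_cpus)(unsigned int max_cpus);
        void (*start_cpu)(unsigned int cpu, unsigned long entry_point);
        void (*send_ipi)(unsigned int cpu, unsigned int message);
    };

    /* A platform registers its ops once, and the generic SMP code calls
     * through the pointers, so different ops can coexist per platform. */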
-
- 29 Mar 2010, 1 commit
-
-
Submitted by Matt Fleming

For the boot CPU, enable_mmu() is called from setup_arch(), but we don't call setup_arch() for any of the other CPUs. So turn on the non-boot CPUs' MMU inside of start_secondary().

I noticed this bug on an SMP board when trying to map I/O memory (smsc911x registers) into the kernel address space. Since the Address Translation bit in MMUCR wasn't set, accessing the virtual address where the smsc911x registers were supposedly mapped actually performed a physical address access.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
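A sketch of where the fix lands, following the commit description; this is the shape of the secondary bring-up path, not the verbatim SH code:

    /* Secondary-CPU entry path (illustrative outline). */
    void demo_start_secondary(void)
    {
        /* The boot CPU had this done in setup_arch(); secondaries must
         * enable address translation themselves before touching any
         * ioremap()/vmalloc() mappings. */
        enable_mmu();

        /* ... per-CPU init, calibration, mark the CPU online, idle ... */
    }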
-
- 20 Jan 2010, 1 commit
-
-
Submitted by Paul Mundt

This provides a machine_ops-based reboot interface loosely cloned from x86, and converts the native sh32 and sh64 cases over to it. Necessary both for tying in SMP support and also for enabling platforms like SDK7786 to add support for their microcontroller-based power managers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
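In the x86-style scheme this mirrors, the interface is a table of reboot-related function pointers that boards can override; an illustrative sketch with assumed field names, not the exact SH struct:

    struct demo_machine_ops {
        void (*restart)(char *cmd);
        void (*halt)(void);
        void (*power_off)(void);
    };

    /* A board driver overrides the defaults, e.g. to route power-off
     * through a microcontroller-based power manager. */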
-
- 14 Oct 2009, 2 commits
-
-
Submitted by Paul Mundt

The secondary CPU info was seeing corrupted results due to not entering all of the setup paths taken by the boot CPU. So we just memcpy() the boot CPU data over directly, and then fix up the per-CPU bits.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

Secondary CPUs already take care of the D-cache bits through the common cache initialization path, and the only thing that is necessary after twiddling around with stack_start is ensuring that the I-cache changes are visible (particularly since this tends to be the only part lacking coherency).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 14 Jun 2009, 2 commits
-
-
Submitted by Rusty Russell

Use the accessors rather than frobbing bits directly (the new versions are const).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Rusty Russell

We're weaning the core code off handing cpumasks around on-stack. This introduces arch_send_call_function_ipi_mask(), and by defining it, the old arch_send_call_function_ipi() is defined by the core code.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
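The arch-side hook has this general shape; the per-CPU send in the loop body is left as a comment because the exact SH helper is platform-specific:

    #include <linux/cpumask.h>

    void arch_send_call_function_ipi_mask(const struct cpumask *mask)
    {
        int cpu;

        for_each_cpu(cpu, mask)
            ;    /* send the "function call" IPI to each CPU in mask */
    }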
-
- 13 Dec 2008, 2 commits
-
-
Submitted by Rusty Russell

Impact: change calling convention of existing clock_event APIs

struct clock_event_device's cpumask field gets changed to take a pointer, as does the ->broadcast function. Another single-patch change. For safety, we BUG_ON() in clockevents_register_device() if it's not set.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
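A minimal before/after sketch of what the new calling convention means for a driver registering a per-CPU clock event device; the demo device and the old-style line in the comment are illustrative:

    #include <linux/clockchips.h>
    #include <linux/cpumask.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(struct clock_event_device, demo_clockevent);

    static void demo_register_clockevent(unsigned int cpu)
    {
        struct clock_event_device *evt = &per_cpu(demo_clockevent, cpu);

        /* was roughly: evt->cpumask = cpumask_of_cpu(cpu);  (a cpumask_t copy) */
        evt->cpumask = cpumask_of(cpu);    /* now a const struct cpumask * */
        clockevents_register_device(evt);
    }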
-
Submitted by Rusty Russell

Impact: cleanup

Each SMP arch defines these themselves. Move them to a central location.

Twists:
1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs simply have phys_cpu_present_map replaced everywhere.
3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky so I just manipulate them both in sync.
4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map' declarations.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Travis <travis@sgi.com>
Cc: ink@jurassic.park.msu.ru
Cc: rmk@arm.linux.org.uk
Cc: starvik@axis.com
Cc: tony.luck@intel.com
Cc: takata@linux-m32r.org
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: paulus@samba.org
Cc: schwidefsky@de.ibm.com
Cc: lethal@linux-sh.org
Cc: wli@holomorphy.com
Cc: davem@davemloft.net
Cc: jdike@addtoit.com
Cc: mingo@redhat.com
-
- 21 Oct 2008, 1 commit
-
-
Submitted by Paul Mundt

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 09 Sep 2008, 1 commit
-
-
Submitted by Manfred Spraul

Right now, there is no notifier that is called on a new CPU before the new CPU begins processing interrupts/softirqs. Various kernel functions would need that notification, e.g. kvm works around it by calling smp_call_function_single(), and rcu polls cpu_online_map.

The patch adds a CPU_STARTING notification. It also adds a helper function that sends the message to all cpu_chain handlers.

Tested on x86-64. All other archs are untested. Especially on sparc, I'm not sure if I got it right.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
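The helper slots into each architecture's secondary bring-up path roughly like this; the surrounding function name is illustrative:

    #include <linux/cpu.h>

    /* Runs on the freshly booted CPU, before it starts taking interrupts. */
    static void demo_secondary_bringup(unsigned int cpu)
    {
        notify_cpu_starting(cpu);    /* fires CPU_STARTING on the cpu_chain */

        /* ... mark the CPU online, enable local interrupts, enter idle ... */
    }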
-
- 08 Sep 2008, 3 commits
-
-
Submitted by Paul Mundt

This hooks up GENERIC_CLOCKEVENTS_BROADCAST and a dummy local timer, which we call in to from the timer IPI when no other local timer is provided.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

This provides a generic smp_message_recv() routine (based on the PPC one) that IPI IRQs can wrap in to.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
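Such a routine is essentially a demultiplexer over IPI message IDs; a rough sketch in which the message names and handler set are illustrative, not the exact SH constants:

    #include <linux/smp.h>

    enum { DEMO_MSG_RESCHEDULE, DEMO_MSG_FUNCTION };

    void demo_smp_message_recv(unsigned int msg)
    {
        switch (msg) {
        case DEMO_MSG_RESCHEDULE:
            /* the interrupt return path handles rescheduling itself */
            break;
        case DEMO_MSG_FUNCTION:
            generic_smp_call_function_interrupt();
            break;
        }
    }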
-
- 26 Jun 2008, 3 commits
-
-
Submitted by Jens Axboe

It's not even passed on to smp_call_function() anymore, since that was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Submitted by Jens Axboe

It's never used and the comments refer to nonatomic and retry interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Submitted by Jens Axboe

This converts sh to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single(). Not tested, but it compiles.

Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
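For context, a minimal usage sketch of the newly supported single-CPU call; the remote function and its caller are hypothetical:

    #include <linux/smp.h>

    static void demo_remote_work(void *info)
    {
        /* runs on the target CPU, in interrupt context */
    }

    static void demo_kick_cpu(int cpu)
    {
        /* last argument: wait for the remote function to finish */
        smp_call_function_single(cpu, demo_remote_work, NULL, 1);
    }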
-
- 21 Mar 2008, 1 commit
-
-
Submitted by Robert P. J. Day

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 21 Sep 2007, 2 commits
-
-
Submitted by Paul Mundt

There was a very preliminary bunch of SMP code scattered around for the SH7604 microcontrollers from way back when, and it has mostly suffered bitrot since then. With the tree already having been slowly getting prepped for SMP, this plugs in most of the remaining platform-independent bits.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Submitted by Paul Mundt

This adds the TLB flushing routines for SMP systems, based on the MIPS implementation, with some additional SH-specific flush routines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 31 May 2007, 1 commit
-
-
Submitted by Evgeniy Polyakov

Several errors were spotted while building a custom config (SMP included). Although SMP still does not compile (no ipi and __smp_call_function) and does not work, this looks a bit cleaner. Some other errors were obtained via a gcc-4.1.0 build.

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 02 Oct 2006, 1 commit
-
-
Submitted by Greg Banks

cpumask: ensure that the cpu_online_map and cpu_possible_map bitmasks, and hence all the macros in <linux/cpumask.h> that require them, are available to modules for all supported combinations of architecture and CONFIG_SMP.

Signed-off-by: Greg Banks <gnb@melbourne.sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 01 Jul 2006, 1 commit
-
-
Submitted by Jörn Engel

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
-