- 01 April 2011, 3 commits
-
-
Committed by Benjamin Herrenschmidt
Nobody uses it; besides, we should always use the normal __cpu_up path anyway.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
This is used by some "soft" hotplug implementations. It needs to call idle_task_exit() when the CPU is going away, and we remove the now no-longer-needed set_cpu_online() and local_irq_enable(), which are handled by the return to start_secondary.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
Various things are torn down when a CPU is hot-unplugged. That CPU is expected to go back to start_secondary when re-plugged, in order to re-initialize everything such as clock sources, maps, etc. Some implementations just return from the cpu_die() callback in the idle loop when the CPU is "re-plugged". This is not enough. We fix it using a little asm trampoline which resets the stack and calls back into start_secondary as if we were all fresh from boot. The trampoline already existed on ppc64; we add it for ppc32.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 November 2010, 1 commit
-
-
Committed by Vaidyanathan Srinivasan
These APIs take a logical cpu number as input:
Change cpu_first_thread_in_core() to cpu_first_thread_sibling()
Change cpu_last_thread_in_core() to cpu_last_thread_sibling()
These APIs convert a core number (index) to logical cpu/thread numbers:
Add cpu_first_thread_of_core(int core)
Change cpu_thread_to_core() to cpu_core_index_of_thread(int cpu)
The goal is to make 'threads_per_core' accessible to the pseries_energy module. Instead of adding an API to read threads_per_core, these are higher-level wrapper functions to convert from logical cpu number to core number. The current APIs cpu_first_thread_in_core() and cpu_last_thread_in_core() return a logical CPU number, while cpu_thread_to_core() returns a core number (index), which is not a logical CPU number. The new APIs are now clearly named to distinguish 'core number' from the first and last 'logical cpu number' in that core. The new APIs cpu_{first,last}_thread_sibling() work on logical cpu numbers, while cpu_first_thread_of_core() and cpu_core_index_of_thread() work on the core index. Example usage (4 threads per core system):
cpu_first_thread_sibling(5) = 4
cpu_last_thread_sibling(5) = 7
cpu_core_index_of_thread(5) = 1
cpu_first_thread_of_core(1) = 4
cpu_core_index_of_thread() is used in cpu_to_drc_index() and cpu_first_thread_of_core() is used in drc_index_to_cpu(), both in the module. Make the API changes to the few callers and export the symbols for use in modules.
Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
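As a rough illustration of the arithmetic behind these helpers, here is a minimal userspace C sketch assuming a fixed power-of-two threads_per_core of 4 as in the example above (the kernel derives threads_per_core from the device tree, so this is not the actual implementation):
```c
#include <stdio.h>

#define THREADS_PER_CORE 4   /* assumed, as in the example above */

/* Sketch only: mirrors the naming of the new API, not the kernel code. */
static int cpu_core_index_of_thread(int cpu)  { return cpu / THREADS_PER_CORE; }
static int cpu_first_thread_of_core(int core) { return core * THREADS_PER_CORE; }
static int cpu_first_thread_sibling(int cpu)  { return cpu & ~(THREADS_PER_CORE - 1); }
static int cpu_last_thread_sibling(int cpu)   { return cpu | (THREADS_PER_CORE - 1); }

int main(void)
{
	/* Reproduces the example usage from the commit message. */
	printf("first sibling of 5: %d\n", cpu_first_thread_sibling(5));  /* 4 */
	printf("last sibling of 5:  %d\n", cpu_last_thread_sibling(5));   /* 7 */
	printf("core index of 5:    %d\n", cpu_core_index_of_thread(5));  /* 1 */
	printf("first thread of 1:  %d\n", cpu_first_thread_of_core(1));  /* 4 */
	return 0;
}
```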
-
- 02 September 2010, 2 commits
-
-
Committed by Paul Mackerras
Currently, when CONFIG_VIRT_CPU_ACCOUNTING is enabled, we use the PURR register for measuring the user and system time used by processes, as well as other related times such as hardirq and softirq times. This turns out to be quite confusing for users because it means that a program will often be measured as taking less time when run on a multi-threaded processor (SMT2 or SMT4 mode) than it does when run on a single-threaded processor (ST mode), even though the program takes longer to finish. The discrepancy is accounted for as stolen time, which is also confusing, particularly when there are no other partitions running. This changes the accounting to use the timebase instead, meaning that the reported user and system times are the actual number of real-time seconds that the program was executing on the processor thread, regardless of which SMT mode the processor is in. Thus a program will generally show greater user and system times when run on a multi-threaded processor than on a single-threaded processor. On pSeries systems on POWER5 or later processors, we measure the stolen time (time when this partition wasn't running) using the hypervisor dispatch trace log. We check for new entries in the log on every entry from user mode and on every transition from kernel process context to soft or hard IRQ context (i.e. when account_system_vtime() gets called). So that we can correctly distinguish time stolen from user time and time stolen from system time, without having to check the log on every exit to user mode, we store separate timestamps for exit to user mode and entry from user mode. On systems that have a SPURR (POWER6 and POWER7), we read the SPURR in account_system_vtime() (as before), and then apportion the SPURR ticks since the last time we read it between scaled user time and scaled system time according to the relative proportions of user time and system time over the same interval. This avoids having to read the SPURR on every kernel entry and exit. On systems that have a PURR but not a SPURR (i.e. POWER5), we do the same using the PURR rather than the SPURR. This disables the DTL user interface in /sys/kernel/debug/powerpc/dtl for now, since it conflicts with the use of the dispatch trace log by the time accounting code.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
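The apportioning step described above amounts to splitting the SPURR (or PURR) delta in the same ratio as the timebase-based user/system split over the interval. A hedged sketch in C, with illustrative names rather than the actual account_system_vtime() code:
```c
#include <stdint.h>

/*
 * Illustrative only: split a SPURR delta between scaled user and
 * scaled system time, proportionally to the timebase-based user and
 * system time accumulated over the same interval.
 */
static void apportion_spurr_delta(uint64_t spurr_delta,
				  uint64_t user_tb, uint64_t sys_tb,
				  uint64_t *user_scaled, uint64_t *sys_scaled)
{
	uint64_t total = user_tb + sys_tb;

	if (!total) {
		*user_scaled = 0;
		*sys_scaled = spurr_delta;
		return;
	}
	*user_scaled = spurr_delta * user_tb / total;
	*sys_scaled = spurr_delta - *user_scaled;
}
```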
-
Committed by Michael Neuling
Simple cleanup: move arch_sd_sibling_asym_packing from process.c to smp.c to save an #ifdef CONFIG_SMP. No functionality change.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 August 2010, 1 commit
-
-
Committed by Darren Hart
During CPU offline/online tests, __cpu_up would flood the logs with the following message: "Processor 0 found." This provides no useful information to the user, as there is no context provided, and since the operation was a success (to this point) it is expected that the CPU will come back online, providing all the feedback necessary. Change the "Processor found" message to DBG(), similar to other such messages in the same function. Also, add an appropriate log level for the "Processor is stuck" message.
Signed-off-by: Darren Hart <dvhltc@us.ibm.com>
Acked-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Nathan Fontenot <nfont@austin.ibm.com>
Cc: Robert Jennings <rcj@linux.vnet.ibm.com>
Cc: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 31 July 2010, 1 commit
-
-
Committed by Tiejun Chen
We already define start_cpu_decrementer() to start the decrementer for APs via the following path: start_secondary() -> secondary_cpu_time_init() -> start_cpu_decrementer(). So remove this incorrect code, introduced by commit e7f75ad0 ("powerpc/47x: Base ppc476 support"). We really should not enable the decrementer before calling set_dec().
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 July 2010, 1 commit
-
-
Committed by Paul Mackerras
Since the decrementer and timekeeping code was moved over to using the generic clockevents and timekeeping infrastructure, several variables and functions have become obsolete and effectively unused. This deletes them. In particular, wakeup_decrementer() is no longer needed since the generic code reprograms the decrementer as part of resuming the timekeeping code, which happens during sysdev resume. Thus the wakeup_decrementer calls in the suspend_enter methods for 52xx platforms have been removed. The call in the powermac cpu frequency change code has been replaced by set_dec(1), which will cause a timer interrupt as soon as interrupts are enabled, and the generic code will then reprogram the decrementer with the correct value. This also simplifies the generic_suspend_en/disable_irqs functions and makes them static, since they are not referenced outside time.c. The preempt_enable/disable calls are removed because the generic code has disabled all but the boot cpu at the point where these functions are called, so we can't be moved to another cpu.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 May 2010, 1 commit
-
-
Committed by Milton Miller
Configuring a powerpc 32-bit kernel for both SMP and SUSPEND turns on CPU_HOTPLUG so that disable_nonboot_cpus can be called by the common suspend code. Previously the definition of cpu_die for ppc32 was in the powermac platform code, causing it to be undefined if that platform was not selected:
arch/powerpc/kernel/built-in.o: In function 'cpu_idle':
arch/powerpc/kernel/idle.c:98: undefined reference to 'cpu_die'
Move the code from setup_64 to smp.c and rename the powermac versions to their specific names. Note that this does not set up the cpu_die pointers in either smp_ops (request a given cpu die) or ppc_md (make this cpu die) for other platforms, but there are generic versions in smp.c.
Reported-by: Matt Sealey <matt@genesi-usa.com>
Reported-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Anton Vorontsov <avorontsov@mvista.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 06 May 2010, 4 commits
-
-
Committed by Anton Blanchard
Since the *_map cpumask variants are deprecated, change the comments to refer to *_mask instead.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Dynamically allocate the cpu_sibling_map and cpu_core_map cpumasks. We don't need to set_cpu_online() the boot cpu in smp_prepare_boot_cpu; init/main.c does it for us. We also postpone setting the boot cpu in cpu_sibling_map and cpu_core_map until the memory allocator is available (smp_prepare_cpus), similar to x86.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
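A hedged sketch of the allocation pattern described here, using the generic cpumask_var_t API; this shows the shape of the change, not the actual patch hunks:
```c
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/smp.h>
#include <linux/topology.h>

/* Sketch only: per-cpu masks become cpumask_var_t and are allocated
 * once the memory allocator is up, in smp_prepare_cpus(). */
static DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);
static DEFINE_PER_CPU(cpumask_var_t, cpu_core_map);

void __init smp_prepare_cpus(unsigned int max_cpus)
{
	unsigned int cpu, boot = smp_processor_id();

	for_each_possible_cpu(cpu) {
		zalloc_cpumask_var_node(&per_cpu(cpu_sibling_map, cpu),
					GFP_KERNEL, cpu_to_node(cpu));
		zalloc_cpumask_var_node(&per_cpu(cpu_core_map, cpu),
					GFP_KERNEL, cpu_to_node(cpu));
	}

	/* The boot cpu joins its own sibling/core maps here rather
	 * than in smp_prepare_boot_cpu(). */
	cpumask_set_cpu(boot, per_cpu(cpu_sibling_map, boot));
	cpumask_set_cpu(boot, per_cpu(cpu_core_map, boot));
}
```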
-
Committed by Anton Blanchard
Use the new cpumask_* functions, and dynamically allocate the cpumask in fixup_irqs.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Use the new cpumask_* functions and dynamically allocate the cpumask in smp_cpus_done.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 05 May 2010, 1 commit
-
-
Committed by Dave Kleikamp
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 07 April 2010, 1 commit
-
-
Committed by Julia Lawall
Use set_cpus_allowed_ptr rather than set_cpus_allowed. The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):
// <smpl>
@@
expression E1,E2;
@@
- set_cpus_allowed(E1, cpumask_of_cpu(E2))
+ set_cpus_allowed_ptr(E1, cpumask_of(E2))

@@
expression E;
identifier I;
@@
- set_cpus_allowed(E, I)
+ set_cpus_allowed_ptr(E, &I)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
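Concretely, the first rule of the semantic patch corresponds to a conversion like the following; the task and cpu variables are placeholders, not names from the patch:
```c
#include <linux/cpumask.h>
#include <linux/sched.h>

/* Illustrative only: pin a task to a single cpu, before and after. */
static void pin_task_old(struct task_struct *tsk, int cpu)
{
	set_cpus_allowed(tsk, cpumask_of_cpu(cpu));     /* copies a whole cpumask_t */
}

static void pin_task_new(struct task_struct *tsk, int cpu)
{
	set_cpus_allowed_ptr(tsk, cpumask_of(cpu));     /* passes a const pointer */
}
```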
-
- 15 January 2010, 1 commit
-
-
Committed by Nathan Fontenot
Move the definition and lock helper routines for the cpu hotplug driver lock from pseries to common powerpc code, to avoid build breaks for platforms other than pseries that use cpu hotplug.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Acked-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 December 2009, 1 commit
-
-
Committed by Valentine Barshak
Remove the CPU from the online map to prevent smp_call_function from sending messages to a stopped CPU.
Signed-off-by: Valentine Barshak <vbarshak@ru.mvista.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 October 2009, 1 commit
-
-
Committed by Tejun Heo
This patch updates percpu-related symbols in powerpc so that percpu symbols are unique and don't clash with local symbols. This serves two purposes: decreasing the possibility of global percpu symbol collisions, and allowing the per_cpu__ prefix to be dropped from percpu symbols.
* arch/powerpc/kernel/perf_callchain.c: s/callchain/cpu_perf_callchain/
* arch/powerpc/kernel/setup-common.c: s/pvr/cpu_pvr/
* arch/powerpc/platforms/pseries/dtl.c: s/dtl/cpu_dtl/
* arch/powerpc/platforms/cell/interrupt.c: s/iic/cpu_iic/
Partly based on Rusty Russell's "alloc_percpu: rename percpu vars which cause name clashes" patch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@ozlabs.org
-
- 24 September 2009, 2 commits
-
-
Committed by Rusty Russell
Use the accessors rather than frobbing bits directly (the new versions are const).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
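The kind of conversion this refers to looks roughly like the following; the specific lines are illustrative rather than taken from the patch:
```c
#include <linux/cpumask.h>

/* Illustrative only: marking a cpu online, old style vs. accessor. */
static void mark_cpu_online_old(unsigned int cpu)
{
	cpu_set(cpu, cpu_online_map);   /* frobs the bit in the map directly */
}

static void mark_cpu_online_new(unsigned int cpu)
{
	set_cpu_online(cpu, true);      /* accessor; works with const masks */
}
```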
-
Committed by Rusty Russell
We're weaning the core code off handing cpumasks around on-stack. This introduces arch_send_call_function_ipi_mask(); by defining it, the old arch_send_call_function_ipi is provided by the core code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 11 September 2009, 1 commit
-
-
Committed by Kumar Gala
The following commit introduced a compile error since it removed the implementation of smp_85xx_basic_setup:
commit 77c0a700 ("powerpc: Properly start decrementer on BookE secondary CPUs")
Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Date: Fri Aug 28 14:25:04 2009 +1000
Make it so that smp_ops probe() and setup_cpu() can be set to NULL.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 August 2009, 1 commit
-
-
Committed by Gautham R Shenoy
The time taken for a single cpu online operation on a pseries machine is as follows:
Dedicated LPAR (POWER6): ~220ms.
Shared LPAR (POWER5): ~240ms.
Of this time, approximately 200ms is taken up by __cpu_up(). This is because we poll every 200ms to check if the new cpu has notified its presence through the cpu_callin_map. We repeat this operation until the new cpu sets the value in cpu_callin_map or 5 seconds elapse, whichever comes earlier. However, using completion structs instead of polling loops, the time taken by the new processor to indicate its presence has been found to be less than 1ms on pseries. That method may not work on all powerpc platforms, though, due to the time-base synchronization code. Keeping this in mind, we can reduce the msleep polling interval from 200ms to 1ms while retaining the 5 second timeout. With this, the time taken for a cpu online operation changes as follows:
Dedicated LPAR (POWER6): 20-25ms.
Shared LPAR (POWER5): 60-80ms.
In both these cases, it was found that the code polls through the loop only once, indicating that 1ms is a reasonable value, at least on pseries. The code needs testing on other powerpc platforms.
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Acked-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
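A rough sketch of the polling loop this describes, under the stated 1ms interval and 5 second timeout; the real __cpu_up() does more than this, so treat it purely as an illustration:
```c
#include <linux/delay.h>
#include <linux/errno.h>

/* Declared in the powerpc smp code; shown here only to make the
 * sketch self-contained. */
extern volatile unsigned int cpu_callin_map[];

static int wait_for_secondary_callin(int cpu)
{
	int c;

	/* Poll every 1ms (previously 200ms), up to roughly 5 seconds. */
	for (c = 5000; c && !cpu_callin_map[cpu]; c--)
		msleep(1);

	return cpu_callin_map[cpu] ? 0 : -ENOENT;
}
```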
-
- 26 June 2009, 1 commit
-
-
Committed by Benjamin Herrenschmidt
The old PowerSurge SMP (i.e. dual or quad 604 machines) code has numerous issues in the modern world. One is that cpu_possible_map is set too late (the device-tree is bogus), so we fail to allocate the interrupt stacks and crash. Another problem is the fact that the timebase is frozen by the bringup of the second CPU, so the delays in the generic code will hang; we need to move some of the calling procedure into the powermac code. This makes it boot again for me.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 21 December 2008, 1 commit
-
-
Committed by Nathan Lynch
The smp code uses cache information to populate cpu_core_map; change it to use the common code for cache lookup.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 16 December 2008, 1 commit
-
-
Committed by Nathan Lynch
smp_hw_index isn't used on 64-bit, so move it from smp.c to setup_32.c.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 13 December 2008, 1 commit
-
-
Committed by Rusty Russell
Impact: cleanup
Each SMP arch defines these cpu maps itself. Move them to a central location. Twists:
1) Some archs (m32, parisc, s390) set possible_map to all 1, so we add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs simply have phys_cpu_present_map replaced everywhere.
3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky so I just manipulate them both in sync.
4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map' declarations.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Mike Travis <travis@sgi.com>
Cc: ink@jurassic.park.msu.ru
Cc: rmk@arm.linux.org.uk
Cc: starvik@axis.com
Cc: tony.luck@intel.com
Cc: takata@linux-m32r.org
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: paulus@samba.org
Cc: schwidefsky@de.ibm.com
Cc: lethal@linux-sh.org
Cc: wli@holomorphy.com
Cc: davem@davemloft.net
Cc: jdike@addtoit.com
Cc: mingo@redhat.com
-
- 19 November 2008, 1 commit
-
-
Committed by Milton Miller
With the new generic smp call function helpers, I noticed the code in smp_message_recv was a single function call in many cases. While getting the message number from the ipi data is easy, we can reduce the path length by a function call and a data-dependent switch by registering separate IPI actions for these simple calls. Originally I left the ipi action array exposed, but then I realized the registration code should be common too. The three users each had their own name array, so I made a fourth to convert all users to use a common one.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 13 October 2008, 1 commit
-
-
Committed by Milton Miller
The comment in the code was asking "Do we have to do this?", and according to x86 and s390 the answer is no: the scheduler will do it before calling the arch hook.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 September 2008, 1 commit
-
-
Committed by Manfred Spraul
Right now, there is no notifier that is called on a new cpu before the new cpu begins processing interrupts/softirqs. Various kernel functions would need that notification; e.g. kvm works around it by calling smp_call_function_single(), and rcu polls cpu_online_map. The patch adds a CPU_STARTING notification. It also adds a helper function that sends the message to all cpu_chain handlers. Tested on x86-64. All other archs are untested. Especially on sparc, I'm not sure if I got it right.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
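A hedged example of how a subsystem might consume the new notification; the callback and variable names here are made up for illustration:
```c
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

/* Illustrative consumer of the new CPU_STARTING event. */
static int example_cpu_callback(struct notifier_block *nb,
				unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;

	if (action == CPU_STARTING) {
		/* Runs on the incoming cpu, before it starts handling
		 * interrupts/softirqs. */
		pr_debug("cpu %u is starting\n", cpu);
	}
	return NOTIFY_OK;
}

static struct notifier_block example_cpu_nb = {
	.notifier_call = example_cpu_callback,
};

/* Registered during init, e.g.: register_cpu_notifier(&example_cpu_nb); */
```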
-
- 28 July 2008, 3 commits
-
-
Committed by Nathan Lynch
Existing Open Firmware practice is to report each processor core as a separate node in the device tree. Report the value of the "reg" OF property corresponding to a logical CPU's device node as the core_id attribute in /sys/devices/system/cpu/cpu*/topology/core_id.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nathan Lynch
Implement the notion of "core siblings" for powerpc. This makes /sys/devices/system/cpu/cpu*/topology/core_siblings present sensible values, indicating online CPUs which share an L2 cache.
BenH: Made cpu_to_l2cache() use of_find_node_by_phandle() instead of an IBM-specific open-coded search.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nathan Lynch
Rather than doing one initialization pass over all the per-cpu cpu_sibling_maps at boot, update the maps at cpu online/offline time. This is a behavior change -- the thread_siblings attribute now reflects only online siblings, whereas it would display offline siblings before. The new behavior matches that of x86, and is arguably more useful.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 26 June 2008, 2 commits
-
-
Committed by Jens Axboe
The nonatomic/retry argument to the smp_call_function() family is never used, and the comments refer to nonatomic and retry interchangeably. So get rid of it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
Committed by Jens Axboe
This converts ppc to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single(). ppc loses the timeout functionality of smp_call_function_mask() with this change, as the generic code does not provide that.
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
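A hedged usage example of the single-cpu interface this adds for powerpc, written against the four-argument signature left after the nonatomic/retry argument removal in the previous entry; the function names are placeholders:
```c
#include <linux/smp.h>

/* Runs on the target cpu, in IPI context with interrupts disabled. */
static void poke_cpu(void *info)
{
	(void)info;
}

static void example_caller(void)
{
	/* Run poke_cpu() on cpu 1 and wait for it to complete. */
	smp_call_function_single(1, poke_cpu, NULL, 1);
}
```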
-
- 14 May 2008, 1 commit
-
-
Committed by Michael Ellerman
Make a few things static in lparcfg.c.
Make the init and exit routines static in rtas_flash.c.
Make things static in rtas_pci.c.
Make some functions static in rtas.c.
Make the fops static in rtas-proc.c.
Remove the unneeded extern for do_gtod in smp.c.
Make clocksource_init() static in time.c.
Make last_tick_len and ticklen_to_xs static in time.c.
Move the declaration of the pvr per-cpu variable into smp.h.
Make kexec_smp_down() and kexec_stack static in machine_kexec_64.c.
Don't return void in arch_teardown_msi_irqs() in msi.c.
Move the declaration of GregorianDay() into asm/time.h.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 02 May 2008, 1 commit
-
-
Committed by Paul Mackerras
This fixes a regression reported by Kamalesh Bulabel where a POWER4 machine would crash because of an SLB miss at a point where the SLB miss exception was unrecoverable. This regression is tracked at: http://bugzilla.kernel.org/show_bug.cgi?id=10082
SLB misses at such points shouldn't happen because the kernel stack is the only memory accessed other than things in the first segment of the linear mapping (which is mapped at all times by entry 0 of the SLB). The context switch code ensures that SLB entry 2 covers the kernel stack, if it is not already covered by entry 0. None of entries 0 to 2 are ever replaced by the SLB miss handler. Where this went wrong is that the context switch code assumes it doesn't have to write to SLB entry 2 if the new kernel stack is in the same segment as the old kernel stack, since entry 2 should already be correct. However, when we start up a secondary cpu, it calls slb_initialize, which doesn't set up entry 2. This is correct for the boot cpu, where we will be using a stack in the kernel BSS at this point (i.e. init_thread_union), but not necessarily for secondary cpus, whose initial stack can be allocated anywhere. This doesn't cause any immediate problem since the SLB miss handler will just create an SLB entry somewhere else to cover the initial stack. In fact it's possible for the cpu to go quite a long time without SLB entry 2 being valid. Eventually, though, the entry created by the SLB miss handler will get overwritten by some other entry, and if the next access to the stack is at an unrecoverable point, we get the crash. This fixes the problem by making slb_initialize create a suitable entry for the kernel stack, if we are on a secondary cpu and the stack isn't covered by SLB entry 0. This requires initializing the get_paca()->kstack field earlier, so I do that in smp_create_idle where the current field is initialized. This also abstracts a bit of the computation that mk_esid_data in slb.c does so that it can be used in slb_initialize.
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 25 January 2008, 2 commits
-
-
Committed by Olof Johansson
smp_send_stop() will send an IPI to all other cpus to shut them down. However, for the case of xmon-based reboots (as well as potentially some panics), the other cpus are (or might be) spinning with interrupts off and won't take the IPI. Current code will drop us into the debugger when the IPI fails, which means we're in an infinite loop that we can't get out of without an external reset of some sort. Instead, make the smp_send_stop() IPI call path just print the warning about being unable to send IPIs, but make it return so the rest of the shutdown sequence can continue. It's not perfect, but it's the lesser of two evils. Also move the call_lock handling outside of smp_call_function_map so we can avoid deadlocks in smp_send_stop().
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Olof Johansson
smp_call_function_map should be static, and for consistency prepend it with __ like other local helper functions in the same file.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-