- 15 November 2012, 14 commits
-
-
Committed by Akinobu Mita
- Calculate the bitmap size with BITS_TO_LONGS() - Use bitmap_empty() to verify that all bits are cleared. This also includes a printk to pr_warn() conversion. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
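As a rough illustration of the helpers this commit switches to (a minimal sketch, not the actual patched code; the table name and size are made up):

```c
#include <linux/bitmap.h>
#include <linux/slab.h>

/* Hypothetical table with `nbits` entries tracked by a bitmap. */
static unsigned long *alloc_table_bitmap(unsigned int nbits)
{
	/* BITS_TO_LONGS() gives the number of longs needed to hold nbits bits. */
	return kcalloc(BITS_TO_LONGS(nbits), sizeof(unsigned long), GFP_KERNEL);
}

static void check_table_bitmap(const unsigned long *map, unsigned int nbits)
{
	/* bitmap_empty() replaces an open-coded scan for leftover set bits. */
	if (!bitmap_empty(map, nbits))
		pr_warn("table still has entries allocated at teardown\n");
}
```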
-
Committed by Michael Neuling
Fix global symbol name to match actual denorm_exception_hv label. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
Just a copy of POWER7 for now. Will update with new code later. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
We are going to reuse this in POWER8 so make the name generic. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
s/intruction/instruction/ Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by K.Prasad
PPC_PTRACE_GETHWDBGINFO, PPC_PTRACE_SETHWDEBUG and PPC_PTRACE_DELHWDEBUG are PowerPC specific ptrace flags that use the watchpoint register. While they are targeted primarily towards BookE users, user-space applications such as GDB have started using them for BookS too. This patch enables the use of generic hardware breakpoint interfaces for these new flags. Apart from the usual benefits of using generic hw-breakpoint interfaces, these changes allow debuggers (such as GDB) to use a common set of ptrace flags for their watchpoint needs and allow more precise breakpoint specification (the length of the watched variable can be specified). Mikey added: rebased and added dbginfo.features around #ifdef CONFIG_HAVE_HW_BREAKPOINT Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
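For context, a debugger would typically drive these flags roughly as in the sketch below (a hedged user-space example; the watched address and error handling are placeholders, and the struct/constant names are taken from the powerpc uapi ptrace header as I understand it):

```c
#include <sys/types.h>
#include <sys/ptrace.h>
#include <asm/ptrace.h>		/* struct ppc_hw_breakpoint, PPC_PTRACE_* */
#include <string.h>

/* Set a write watchpoint on `addr` in the traced process `pid`. */
static long set_write_watchpoint(pid_t pid, unsigned long addr)
{
	struct ppc_hw_breakpoint bp;

	memset(&bp, 0, sizeof(bp));
	bp.version        = PPC_DEBUG_CURRENT_VERSION;
	bp.trigger_type   = PPC_BREAKPOINT_TRIGGER_WRITE;
	bp.addr_mode      = PPC_BREAKPOINT_MODE_EXACT;
	bp.condition_mode = PPC_BREAKPOINT_CONDITION_NONE;
	bp.addr           = addr;

	/* Returns a slot number that can later be passed to
	 * PPC_PTRACE_DELHWDEBUG to remove the watchpoint again. */
	return ptrace(PPC_PTRACE_SETHWDEBUG, pid, 0, &bp);
}
```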
-
Committed by Michael Ellerman
The last user of ppc_md.idle_loop() was removed when we dropped the legacy iSeries code, in commit 8ee3e0d6. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Li Zhong
This patch tries to fix the following BUG report:
[ 0.012313] BUG: MAX_STACK_TRACE_ENTRIES too low!
[ 0.012318] turning off the locking correctness validator.
[ 0.012321] Call Trace:
[ 0.012330] [c00000017666f6d0] [c000000000012128] .show_stack+0x78/0x184 (unreliable)
[ 0.012339] [c00000017666f780] [c0000000000b6348] .save_trace+0x12c/0x14c
[ 0.012345] [c00000017666f800] [c0000000000b7448] .mark_lock+0x2bc/0x710
[ 0.012351] [c00000017666f8b0] [c0000000000bb198] .__lock_acquire+0x748/0xaec
[ 0.012357] [c00000017666f9b0] [c0000000000bb684] .lock_acquire+0x148/0x194
[ 0.012365] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
[ 0.012372] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
[ 0.012380] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
[ 0.012386] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
[ 0.012392] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
[ 0.012398] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
[ 0.012404] [c00000017666fa00] [c00000017666fb20] 0xc00000017666fb20
[ 0.012410] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
[ 0.012416] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
[ 0.012422] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
[ 0.012427] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
[ 0.012433] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
[ 0.012439] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
.......
The reason is that the back chain of c00000017666fe30 (ret_from_kernel_thread) contains some invalid value, which might form a loop. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
OPAL provides the firmware base/entry in registers at boot time for debugging purposes. We had a bug in the code trying to stash these into the appropriate kernel globals (a line of code was probably dropped by accident back when this was merged). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Julia Lawall
The function initialize_flash_pde_data is only called four times. All four calls are in the function rtas_flash_init, and on the failure of any of the calls, remove_flash_pde is called on the third argument of each of the calls. There is thus no need for initialize_flash_pde_data to call remove_flash_pde on the same argument. remove_flash_pde kfrees the data field of its argument, and does not clear that field, so this amounts to a possible double free. A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/) // <smpl> @r@ identifier f,free,a; parameter list[n] ps; type T; expression e; @@ f(ps,T a,...) { ... when any when != a = e if(...) { ... free(a); ... return ...; } ... when any } @@ identifier r.f,r.free; expression x,a; expression list[r.n] xs; @@ * x = f(xs,a,...); if (...) { ... free(a); ... return ...; } // </smpl> Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
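The bug pattern is easier to see in a stripped-down sketch (hypothetical names and structures; this is not the actual rtas_flash code):

```c
#include <linux/slab.h>
#include <linux/errno.h>

/* Hypothetical stand-ins for the real rtas_flash structures/helpers. */
struct flash_data { void *buf; };

static bool setup_fails(void) { return true; }	/* stand-in failure path */

/* The helper frees data->buf on its own failure path but leaves the
 * pointer set, mirroring initialize_flash_pde_data/remove_flash_pde. */
static int init_flash_data(struct flash_data *data)
{
	data->buf = kzalloc(64, GFP_KERNEL);
	if (!data->buf)
		return -ENOMEM;
	if (setup_fails()) {
		kfree(data->buf);		/* first free */
		return -EIO;
	}
	return 0;
}

static void caller(struct flash_data *data)
{
	if (init_flash_data(data))
		kfree(data->buf);		/* second free of the same pointer:
						 * the double free the patch removes */
}
```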
-
Committed by Michael Ellerman
The last user of udbg_read() was removed in 2005, in commit fca5dcd4 "Simplify and clean up the xmon terminal I/O". Given we haven't needed it for 7 years we can probably drop it. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tony Breeds
Since the "Disintegrate asm/system.h for PowerPC" commit (ae3a197e), this has been failing when DEBUG is #defined. Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 18 October 2012, 1 commit
-
-
Committed by Deepthi Dharwar
smt_snooze_delay was designed to delay the idle loop's nap entry in the native idle code before it got ported over for use as part of the cpuidle framework. A negative value assigned to smt_snooze_delay should result in busy looping, in other words disabling the entry to nap state. - https://lists.ozlabs.org/pipermail/linuxppc-dev/2010-May/082450.html This particular functionality can currently be achieved by echo 1 > /sys/devices/system/cpu/cpu*/state1/disable but it is broken when one assigns a negative value to the smt_snooze_delay variable either via the sysfs entry or the ppc64_cpu utility. This patch aims to fix this by disabling the nap state when the smt_snooze_delay variable is set to a negative value. Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 06 October 2012, 1 commit
-
-
Committed by Denys Vlasenko
This is a preparatory patch for the introduction of the NT_SIGINFO elf note. Make the location of compat_siginfo_t uniform across the eight architectures which have it. Now it can be pulled in by including asm/compat.h or linux/compat.h. Most of the copies are verbatim. compat_uid[32]_t had to be replaced by __compat_uid[32]_t. compat_uptr_t had to be moved up before compat_siginfo_t in asm/compat.h on several architectures (tile already had it moved up). compat_sigval_t had to be relocated from linux/compat.h to asm/compat.h. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Amerigo Wang <amwang@redhat.com> Cc: "Jonathan M. Foote" <jmfoote@cert.org> Cc: Roland McGrath <roland@hack.frob.com> Cc: Pedro Alves <palves@redhat.com> Cc: Fengguang Wu <fengguang.wu@intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 04 October 2012, 1 commit
-
-
Committed by Anton Blanchard
There are a number of issues in the recent IOMMU pools code: - On a preempt kernel we might switch CPUs in the middle of building a scatter-gather list. When this happens the handle hint passed in no longer falls within the local CPU's pool. Check for this and fall back to the pool hint. - We were missing a spin_unlock/spin_lock in one spot where we switch pools. - We need to provide locking around dart_tlb_invalidate_all and dart_tlb_invalidate_one now that the global lock is gone. Reported-by: Alexander Graf <agraf@suse.de> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: <stable@kernel.org> [v3.6]
-
- 03 October 2012, 1 commit
-
-
Committed by Catalin Marinas
This function is used by sparc, powerpc and arm64 for compat support. The patch adds a generic implementation which calls do_sendfile() directly and avoids set_fs(). The sparc architecture has wrappers for the sign extensions while powerpc relies on the compiler to do this. The patch adds wrappers for powerpc to handle the u32->int type conversion. compat_sys_sendfile64() can be replaced by a sys_sendfile() call since compat_loff_t has the same size as off_t on a 64-bit system. On powerpc, the patch also changes the 64-bit sendfile call from sys_sendfile64 to sys_sendfile. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David S. Miller <davem@davemloft.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
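The sign-extension wrapper idea looks roughly like this (a sketch only; the exact wrapper names and signatures in arch/powerpc differ, and the wrapper shown here is illustrative):

```c
#include <linux/compat.h>

/* Sketch: a 32-bit task hands the kernel raw u32 register values, so
 * anything the ABI treats as a signed int (the two fds here) must be
 * explicitly cast back to int; the compiler will not sign-extend a
 * u32 into a 64-bit long for us. */
long compat_sendfile_wrapper(u32 out_fd, u32 in_fd,
			     compat_uptr_t offset, u32 count)
{
	return compat_sys_sendfile((int)out_fd, (int)in_fd,
				   compat_ptr(offset), count);
}
```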
-
- 01 October 2012, 3 commits
-
-
Committed by Richard Weinberger
This include is no longer needed. (It seems to be a leftover from try_to_freeze().) Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
The only non-obvious part is that current_pt_regs() is really needed here - task_pt_regs() is NULL for kernel threads; it's OK for ptrace uses (the thing task_pt_regs() is intended for), but not for us. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
... and get rid of in-kernel syscalls in kernel_thread() Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 27 September 2012, 1 commit
-
-
Committed by Michael Ellerman
In commit 407821a3 we assigned a poison value to the paca->data_offset. Unfortunately with CONFIG_LOCK_STAT=y lockdep will read & write percpu data very early in boot, prior to us initialising the percpu areas, leading to a crash. We have been getting away with this because the data_offset was previously set to zero. This causes lockdep to read & write the initial copy of the percpu variables, which are discarded later in boot. Although that is "fishy", it does work, and for lock statistics it is no big deal to discard the counts from early boot. So set paca->data_offset = 0 for the boot cpu paca only. Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 25 September 2012, 4 commits
-
-
Committed by Frederic Weisbecker
Move the code that finds out to which context we account the cputime into the generic layer. Archs that consider the whole time spent in the idle task as idle time (ia64, powerpc) can rely on the generic vtime_account() and implement vtime_account_system() and vtime_account_idle(), letting the generic code decide when to call which API. Archs that have their own meaning of idle time, such as s390 which only considers the time spent in CPU low power mode as idle time, can just override vtime_account(). Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org>
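The generic decision described above boils down to something like the following (a simplified sketch of the generic layer, not a verbatim copy; locking around the call is omitted):

```c
#include <linux/sched.h>
#include <linux/hardirq.h>
#include <linux/kernel_stat.h>

/* Generic entry point: pick the accounting bucket for the time that
 * just elapsed on behalf of @tsk. Architectures with their own notion
 * of idle time (e.g. s390) override this whole function instead. */
void vtime_account(struct task_struct *tsk)
{
	if (in_interrupt() || !is_idle_task(tsk))
		vtime_account_system(tsk);	/* arch-provided */
	else
		vtime_account_idle(tsk);	/* arch-provided */
}
```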
-
Committed by Frederic Weisbecker
Use a naming based on vtime as a prefix for the virtual-based cputime accounting APIs: - account_system_vtime() -> vtime_account() - account_switch_vtime() -> vtime_task_switch() This makes it easier to allow for further declensions such as vtime_account_system(), vtime_account_idle(), ... if we want to find out the context we account to from generic code. This also makes it clearer which subsystem these APIs belong to. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org>
-
Committed by John Stultz
To help migrate architectures over to the new update_vsyscall method, redefine CONFIG_GENERIC_TIME_VSYSCALL as CONFIG_GENERIC_TIME_VSYSCALL_OLD. Cc: Tony Luck <tony.luck@intel.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Turner <pjt@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: John Stultz <john.stultz@linaro.org>
-
Committed by John Stultz
Since users will need to include timekeeper_internal.h, move the update_vsyscall definitions to timekeeper_internal.h. Cc: Tony Luck <tony.luck@intel.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Paul Turner <pjt@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: John Stultz <john.stultz@linaro.org>
-
- 19 September 2012, 1 commit
-
-
Committed by Zhao Chenhui
During suspend, all interrupts including IPIs will be disabled. In this case, the suspend process will hang on SMP systems. To prevent this, pass the flag IRQF_NO_SUSPEND when requesting the IPI irq. Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com> Signed-off-by: Li Yang <leoli@freescale.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
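In request_irq() terms the fix amounts to something like this (a minimal sketch; the handler, name and the way the IPI virq is obtained are placeholders, not the actual MPIC code):

```c
#include <linux/interrupt.h>

static irqreturn_t ipi_action(int irq, void *dev_id)
{
	/* handle the inter-processor interrupt */
	return IRQ_HANDLED;
}

static int setup_ipi(unsigned int virq)
{
	/* IRQF_NO_SUSPEND keeps this IRQ enabled across suspend_device_irqs(),
	 * so CPUs can still be coordinated while the system is suspending. */
	return request_irq(virq, ipi_action,
			   IRQF_NO_SUSPEND | IRQF_PERCPU,
			   "ipi", NULL);
}
```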
-
- 18 September 2012, 1 commit
-
-
Committed by Tiejun Chen
We can't emulate stwu since that may corrupt the current exception stack. So we will have to do the real store operation in the exception return code. Firstly we'll allocate a trampoline exception frame below the kprobed function's stack and copy the current exception frame to the trampoline. Then we can do the real store operation to implement 'stwu', and reroute the trampoline frame to r1 to complete this exception migration. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
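For reference, the update-form store that has to be handled behaves as modelled below (a plain C model of the instruction semantics only, not kernel code):

```c
#include <stdint.h>

/* Model of "stwu rS, D(rA)": store word, then update rA with the
 * effective address. When rA is r1, the stack pointer moves as part
 * of the same instruction, which is why a naive in-place emulation
 * would scribble over the exception stack frame currently in use. */
static uintptr_t stwu_model(uint8_t *mem, uintptr_t ra, int16_t d, uint32_t rs)
{
	uintptr_t ea = ra + d;			/* effective address           */
	*(uint32_t *)(mem + ea) = rs;		/* the 32-bit store            */
	return ea;				/* new value written back to rA */
}
```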
-
- 17 September 2012, 3 commits
-
-
Committed by Li Zhong
There are a few tracepoints in the interrupt code path which are before irq_enter() or after irq_exit(), like trace_irq_entry()/trace_irq_exit() in do_IRQ(), and trace_timer_interrupt_entry()/trace_timer_interrupt_exit() in timer_interrupt(). If the interrupt comes from idle, and because a tracepoint contains an RCU read-side critical section, we could see the following suspicious RCU usage reported:
[ 145.127743] ===============================
[ 145.127747] [ INFO: suspicious RCU usage. ]
[ 145.127752] 3.6.0-rc3+ #1 Not tainted
[ 145.127755] -------------------------------
[ 145.127759] /root/.workdir/linux/arch/powerpc/include/asm/trace.h:33 suspicious rcu_dereference_check() usage!
[ 145.127765] other info that might help us debug this:
[ 145.127771] RCU used illegally from idle CPU!
[ 145.127771] rcu_scheduler_active = 1, debug_locks = 0
[ 145.127777] RCU used illegally from extended quiescent state!
[ 145.127781] no locks held by swapper/0/0.
[ 145.127785] stack backtrace:
[ 145.127789] Call Trace:
[ 145.127796] [c00000000108b530] [c000000000013c40] .show_stack+0x70/0x1c0 (unreliable)
[ 145.127806] [c00000000108b5e0] [c0000000000f59d8] .lockdep_rcu_suspicious+0x118/0x150
[ 145.127813] [c00000000108b680] [c00000000000fc58] .do_IRQ+0x498/0x500
[ 145.127820] [c00000000108b750] [c000000000003950] hardware_interrupt_common+0x150/0x180
[ 145.127828] --- Exception: 501 at .plpar_hcall_norets+0x84/0xd4
[ 145.127828] LR = .check_and_cede_processor+0x38/0x70
[ 145.127836] [c00000000108bab0] [c0000000000665dc] .shared_cede_loop+0x5c/0x100
[ 145.127844] [c00000000108bb70] [c000000000588ab0] .cpuidle_enter+0x30/0x50
[ 145.127850] [c00000000108bbe0] [c000000000588b0c] .cpuidle_enter_state+0x3c/0xb0
[ 145.127857] [c00000000108bc60] [c000000000589730] .cpuidle_idle_call+0x150/0x6c0
[ 145.127863] [c00000000108bd30] [c000000000058440] .pSeries_idle+0x10/0x40
[ 145.127870] [c00000000108bda0] [c00000000001683c] .cpu_idle+0x18c/0x2d0
[ 145.127876] [c00000000108be60] [c00000000000b434] .rest_init+0x124/0x1b0
[ 145.127884] [c00000000108bef0] [c0000000009d0d28] .start_kernel+0x568/0x588
[ 145.127890] [c00000000108bf90] [c000000000009660] .start_here_common+0x20/0x40
This is because RCU usage in interrupt context should happen in the area marked by rcu_irq_enter()/rcu_irq_exit(), which are called in irq_enter()/irq_exit() respectively. Move the tracepoints into the irq_enter()/irq_exit() area to avoid the report. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
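The resulting ordering is sketched below (simplified; the real do_IRQ() does considerably more, and this is only meant to show that the tracepoints now sit inside the rcu_irq_enter()/rcu_irq_exit() window):

```c
#include <linux/irq.h>
#include <linux/hardirq.h>
#include <asm/trace.h>

void do_IRQ(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	irq_enter();			/* calls rcu_irq_enter(): RCU is watching again */
	trace_irq_entry(regs);		/* tracepoint RCU usage is now legal            */

	/* ... look up and run the interrupt handler ... */

	trace_irq_exit(regs);
	irq_exit();			/* rcu_irq_exit() after the last tracepoint     */
	set_irq_regs(old_regs);
}
```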
-
Committed by Aneesh Kumar K.V
Increase max addressable range to 64TB. This is not tested on real hardware yet. Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
On POWER6 and POWER7, if the input operand to an instruction is a denormalised single precision binary floating point value, we can take a denormalisation exception where it's expected that the hypervisor (HV=1) will fix up the inputs before the instruction is run. This adds code to handle this denormalisation exception for POWER6 and POWER7. It also adds a CONFIG_PPC_DENORMALISATION option and sets it in pseries/ppc64_defconfig. This is useful on bare metal systems only. Based on a patch from Milton Miller. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 13 September 2012, 7 commits
-
-
Committed by Jia Hongtao
Remove the dependency on PCI initialization for SWIOTLB initialization, so that PCI can be initialized at the proper time. SWIOTLB is partly determined by the PCI inbound/outbound map, which is assigned during PCI initialization, but swiotlb_init() should be done at the stage of mem_init(), which is much earlier than PCI initialization. So we reserve the memory for SWIOTLB first and free it if it turns out not to be needed. All boards are converted to fit this change. Signed-off-by: Jia Hongtao <B38951@freescale.com> Signed-off-by: Li Yang <leoli@freescale.com> Acked-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Varun Sethi
Added CPU_FTR_EMB_HV feature check for e5500. Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com> Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Varun Sethi
For the 64 bit case, separate out the e5500 cpu_setup and cpu_restore functions. The cpu_setup function (for the primary core) is passed the cpu_spec pointer, which is not available in the case of the cpu_restore function. Also, in our case we will have to manipulate the CPU_FTR_EMB_HV flag on the primary core. Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Varun Sethi
Merge the 32 bit cpu setup code for e500mc/e5500 and define the "cpu_restore" routine (for e5500/e6500) only for the 64 bit case. The cpu_restore routine is used in the 64 bit case for setting up the secondary cores. Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Varun Sethi
Move the E.HV check and the CPU_FTR_EMB_HV flag manipulation to the cpu setup code. Create a separate routine for the E.HV IVORs setup. Signed-off-by: Varun Sethi <Varun.Sethi@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Zhao Chenhui
Add support to disable and re-enable individual cores at runtime on MPC85xx/QorIQ SMP machines. Currently the e500v1/e500v2 cores are supported. MPC85xx machines use an ePAPR spin-table in the boot page for CPU kick-off. This patch uses the boot page from the bootloader to boot a core at runtime. It supports 32-bit and 36-bit physical addresses. Signed-off-by: Li Yang <leoli@freescale.com> Signed-off-by: Jin Qing <b24347@freescale.com> Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
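For reference, the per-CPU spin-table entry exposed by the boot page follows the ePAPR cpu-release layout, roughly as below (a sketch based on my reading of the ePAPR spec; the struct and field names here are illustrative, not necessarily those used in the patch):

```c
#include <linux/types.h>

/* ePAPR "spin table" entry, one per secondary core. The core spins
 * until the entry address is changed from the hold value to the kernel
 * entry point; r3 is handed to the core in GPR3 when it is released. */
struct epapr_spin_table {
	u32 addr_h;	/* entry address, high 32 bits */
	u32 addr_l;	/* entry address, low 32 bits  */
	u32 r3_h;
	u32 r3_l;
	u32 reserved;
	u32 pir;	/* processor id register value */
};
```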
-
Committed by Zhao Chenhui
In the case of cpu hotplug, cpu_state should be set to CPU_UP_PREPARE when kicking the cpu. Otherwise, cpu_state is always CPU_DEAD after calling generic_set_cpu_dead(), which means the delay in generic_cpu_die() never happens. Signed-off-by: Zhao Chenhui <chenhui.zhao@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 12 September 2012, 1 commit
-
-
Committed by Gavin Shan
This patch implements pcibios_window_alignment() so powerpc platforms can force P2P bridge windows to be at larger alignments than the PCI spec requires. [bhelgaas: changelog] Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
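A platform override would look roughly like this (a hedged sketch; the alignment values are invented, and the "return 1 means no extra constraint" convention reflects my reading of the generic hook rather than the powerpc implementation):

```c
#include <linux/pci.h>

/* Overrides the __weak generic hook so P2P bridge windows get placed
 * at alignments larger than the PCI spec minimum. */
resource_size_t pcibios_window_alignment(struct pci_bus *bus,
					 unsigned long type)
{
	if (type & IORESOURCE_MEM)
		return 16 * 1024 * 1024;	/* e.g. 16MB-aligned memory windows */
	if (type & IORESOURCE_IO)
		return 64 * 1024;		/* e.g. 64KB-aligned I/O windows    */
	return 1;				/* no additional constraint          */
}
```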
-
- 10 September 2012, 1 commit
-
-
Committed by Michael Neuling
Currently we mark the DABRX to interrupt on all matches (hypervisor/kernel/user) and then filter in software. We can be a lot smarter now that we can set the DABRX dynamically. This sets the DABRX based on the flags passed by the user. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
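The kind of mapping involved is sketched below (a simplified, hypothetical helper; the DABRX_* bit names are from asm/reg.h as I understand them, and the boolean flags stand in for whatever privilege information the breakpoint request carries):

```c
#include <asm/reg.h>

/* Build a DABRX value from breakpoint privilege flags so the hardware
 * only raises the data breakpoint exception in the requested contexts,
 * instead of matching everywhere and filtering in software. */
static unsigned long bp_flags_to_dabrx(bool user, bool kernel, bool hyp)
{
	unsigned long dabrx = 0;

	if (user)
		dabrx |= DABRX_USER;
	if (kernel)
		dabrx |= DABRX_KERNEL;
	if (hyp)
		dabrx |= DABRX_HYP;

	return dabrx;
}
```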
-