- 08 Jun 2012, 1 commit
-
-
By Paul Mackerras

This reverts commit 68568add ("powerpc/time: Remove unnecessary sanity check of decrementer expiration"). We do need to check whether we have reached the expiration time of the next event, because we sometimes get an early decrementer interrupt, most notably when we set the decrementer to 1 in arch_irq_work_raise(). The effect of not having the sanity check is that if timer_interrupt() gets called early, we leave the decrementer set to its maximum value, which means we then don't get any more decrementer interrupts for about 4 seconds (or longer, depending on timebase frequency). I saw these pauses as a consequence of getting a stray hypervisor decrementer interrupt left over from exiting a KVM guest. This isn't quite a straight revert because of changes to the surrounding code, but it restores the same algorithm as was previously used.

Cc: stable@vger.kernel.org
Acked-by: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
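A minimal C sketch of the restored idea, assuming the per-CPU decrementers_next_tb bookkeeping used by arch/powerpc/kernel/time.c; the helper name and exact structure are illustrative, not the reverted-to code verbatim:

    #include <linux/clockchips.h>
    #include <linux/percpu.h>
    #include <asm/time.h>

    static DEFINE_PER_CPU(u64, decrementers_next_tb);

    /* Re-check the expiry before acting; re-arm when the interrupt is early. */
    static void handle_decrementer(struct clock_event_device *evt)
    {
        u64 *next_tb = this_cpu_ptr(&decrementers_next_tb);
        u64 now = get_tb();

        if (now >= *next_tb) {
            *next_tb = ~(u64)0;          /* nothing further scheduled yet */
            if (evt->event_handler)
                evt->event_handler(evt); /* the event really is due */
        } else {
            set_dec(*next_tb - now);     /* early interrupt: re-arm for the rest */
        }
    }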
-
- 02 Jun 2012, 6 commits
-
-
By Al Viro

Does block_sigmask() + tracehook_signal_handler(); called when the sigframe has been successfully built. All architectures have been converted to it; block_sigmask() itself is gone now (merged into this one). I'm still not too happy with the signature, but that's a separate story (IMO we need a structure that would contain signal number + siginfo + k_sigaction, so that get_signal_to_deliver() would fill one, and signal_delivered(), handle_signal() and probably setup...frame() would take one).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
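A hedged sketch of what such a helper can look like; the prototype mirrors the changelog's description, but, as the text itself says, treat the exact signature as provisional:

    #include <linux/ptrace.h>
    #include <linux/sched.h>
    #include <linux/signal.h>
    #include <linux/tracehook.h>

    /* Provisional shape of signal_delivered(); not a verbatim copy. */
    void signal_delivered(int sig, siginfo_t *info, struct k_sigaction *ka,
                          struct pt_regs *regs, int stepping)
    {
        sigset_t blocked;

        /* The saved sigmask now lives on the signal frame, so the
         * restore-sigmask flag can simply be cleared. */
        clear_restore_sigmask();

        /* What block_sigmask() used to do: block the delivered signal
         * together with the handler's sa_mask. */
        sigorsets(&blocked, &current->blocked, &ka->sa.sa_mask);
        if (!(ka->sa.sa_flags & SA_NODEFER))
            sigaddset(&blocked, sig);
        set_current_blocked(&blocked);

        tracehook_signal_handler(sig, info, ka, regs, stepping);
    }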
-
By Al Viro

... it's just a call of set_current_blocked() now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Only 3 out of 63 do not. Renamed the current variant to __set_current_blocked(), added set_current_blocked() that will exclude unblockable signals, and switched open-coded instances to it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
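A minimal sketch of the resulting split, assuming the usual kernel/signal.c layout; __set_current_blocked() is the renamed old variant:

    #include <linux/sched.h>
    #include <linux/signal.h>

    void __set_current_blocked(const sigset_t *set);   /* the renamed old variant */

    /* New entry point: filter out the signals that may never be blocked. */
    void set_current_blocked(sigset_t *newset)
    {
        sigdelsetmask(newset, sigmask(SIGKILL) | sigmask(SIGSTOP));
        __set_current_blocked(newset);
    }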
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Replace the boilerplate "should we use ->saved_sigmask or ->blocked?" with calls of an obvious inlined helper...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

First fruits of the ..._restore_sigmask() helpers: now we can pull the boilerplate "signal didn't have a handler, clear RESTORE_SIGMASK and restore the blocked mask from ->saved_sigmask" into a common helper. Open-coded instances switched...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
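A sketch of the helper being described, assuming the task_struct field is saved_sigmask as in mainline:

    #include <linux/sched.h>
    #include <linux/signal.h>

    void __set_current_blocked(const sigset_t *set);

    /* Common helper replacing the open-coded "no handler ran" pattern. */
    static inline void restore_saved_sigmask(void)
    {
        if (test_and_clear_restore_sigmask())
            __set_current_blocked(&current->saved_sigmask);
    }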
-
- 28 May 2012, 1 commit
-
-
By Paul Mackerras

This is much the same as for SPARC, except that we can do the find_zero() function more efficiently using the count-leading-zeroes instructions. Tested on 32-bit and 64-bit PowerPC.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
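A hedged sketch of the 64-bit, big-endian count-leading-zeroes idea; the in-tree asm/word-at-a-time.h differs in detail, and the 32-bit variant uses cntlzw instead of cntlzd:

    /* Mark each zero byte of a word with 0x80 (classic bit trick). */
    static inline unsigned long has_zero(unsigned long val)
    {
        return (val - 0x0101010101010101ul) & ~val & 0x8080808080808080ul;
    }

    /* On big-endian, the first zero byte owns the most significant marker
     * bit, so counting leading zeroes and dividing by 8 gives its index. */
    static inline unsigned long find_zero(unsigned long mask)
    {
        unsigned long leading_zero_bits;

        asm ("cntlzd %0,%1" : "=r" (leading_zero_bits) : "r" (mask));
        return leading_zero_bits >> 3;
    }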
-
- 24 May 2012, 1 commit
-
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 22 May 2012, 4 commits
-
-
By Kim Phillips

Setting CONFIG_IRQ_ALL_CPUS distributes IRQs to CPUs only when the number of online CPUs equals NR_CPUS. See commit 280ff974 ("sparc64: fix and optimize irq distribution") for more details. Using the online mask fixes IRQ-to-CPU distribution on systems that boot with fewer than NR_CPUS.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
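A hedged sketch of the kind of selection logic involved; the real interrupt-controller code also takes a lock and keeps its rover per controller, so the names and structure here are illustrative only:

    #include <linux/cpumask.h>

    static unsigned int irq_rover;

    /* Pick a target CPU, spreading the default-affinity case across the
     * CPUs that are actually online rather than requiring all of NR_CPUS. */
    static int irq_choose_cpu(const struct cpumask *mask)
    {
        if (!cpumask_equal(mask, cpu_online_mask)) {
            /* explicit affinity: first online CPU in the requested mask */
            int cpu = cpumask_first_and(mask, cpu_online_mask);
            return cpu < nr_cpu_ids ? cpu : cpumask_first(cpu_online_mask);
        }

        /* default affinity: round-robin over whatever is online */
        irq_rover = cpumask_next(irq_rover, cpu_online_mask);
        if (irq_rover >= nr_cpu_ids)
            irq_rover = cpumask_first(cpu_online_mask);
        return irq_rover;
    }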
-
By Benjamin Herrenschmidt

This reverts commit 1b788400. It causes oopses when passed incorrect arguments and has a design flaw in its use of IPIs with interrupts disabled.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Guts of the saved_sigmask-based sigsuspend/rt_sigsuspend. Takes a kernel sigset_t *. Open-coded instances replaced with calls to it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
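A sketch of such a helper following the saved_sigmask pattern the changelog refers to; the details are indicative rather than verbatim:

    #include <linux/errno.h>
    #include <linux/sched.h>
    #include <linux/signal.h>

    /* Block with *set, sleep until a signal arrives, and arrange for the
     * old mask to be restored once the handler has run. */
    int sigsuspend(sigset_t *set)
    {
        current->saved_sigmask = current->blocked;
        set_current_blocked(set);

        __set_current_state(TASK_INTERRUPTIBLE);
        schedule();
        set_restore_sigmask();
        return -ERESTARTNOHAND;
    }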
-
- 17 May 2012, 1 commit
-
-
By Suresh Siddha

The historical prepare_to_copy() is mostly a no-op, duplicated for the majority of the architectures, with the rest following the x86 model of flushing the extended register state (like the FPU) there. Remove it and use arch_dup_task_struct() instead.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1336692811-30576-1-git-send-email-suresh.b.siddha@intel.com
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
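On powerpc the replacement hook ends up looking roughly like the sketch below; only the FP and Altivec flushes are shown, and the real function also covers VSX/SPE where configured:

    #include <linux/sched.h>
    #include <asm/switch_to.h>

    /* Flush live register state into src's thread_struct, then copy. */
    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
        flush_fp_to_thread(src);
        flush_altivec_to_thread(src);
        *dst = *src;
        return 0;
    }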
-
- 16 May 2012, 1 commit
-
-
By Kent Yoder

This patch adds the cas bits to advertise support for the Platform Facilities Option (PFO) based encryption accelerator device. The nx device driver provides support for this hardware feature.

Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 May 2012, 4 commits
-
-
By Kent Yoder

This patch adds the cas bits to advertise support for the Platform Facilities Option (PFO) based random number generator accelerator. The pseries-rng driver provides support for this hardware feature.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Kent Yoder

Add support for the Platform Facilities Option (PFO) to the VIO bus. These devices have a separate root node in OpenFirmware which requires additional parsing to map into the existing VIO device structure fields. This adds the interface for PFO device drivers to make synchronous hypervisor calls.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By K.Prasad

PPC_PTRACE_GETHWDBGINFO, PPC_PTRACE_SETHWDEBUG and PPC_PTRACE_DELHWDEBUG are PowerPC-specific ptrace flags that use the watchpoint register. While they are targeted primarily at BookE users, user-space applications such as GDB have started using them for BookS too. This patch enables the use of the generic hardware breakpoint interfaces for these new flags. Apart from the usual benefits of using generic hw-breakpoint interfaces, these changes allow debuggers (such as GDB) to use a common set of ptrace flags for their watchpoint needs and allow more precise breakpoint specification (the length of the variable can be specified).

Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Robert Jennings

This patch changes the architecture vector to advertise support for a lower minimum virtual processor entitled capacity. The default minimum without this patch is 10%; this patch specifies 1%.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 12 May 2012, 1 commit
-
-
By Benjamin Herrenschmidt

So we have another case of paca->irq_happened getting out of sync with the HW irq state. This can happen when a perfmon interrupt occurs while soft disabled, as it will return to a soft-disabled but hard-enabled context while leaving a stale PACA_IRQ_HARD_DIS flag set. This patch fixes it, and also adds a test for the condition of those flags being out of sync in arch_local_irq_restore() when CONFIG_TRACE_IRQFLAGS is enabled. This helps catch those gremlins faster (and so far I can't seem to see any more, so that's good news).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
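A hedged sketch of the kind of consistency test CONFIG_TRACE_IRQFLAGS enables; the exact condition and its placement inside arch_local_irq_restore() differ in the real patch:

    #include <linux/bug.h>
    #include <asm/hw_irq.h>
    #include <asm/paca.h>
    #include <asm/reg.h>

    #ifdef CONFIG_TRACE_IRQFLAGS
    /* MSR[EE] on while PACA_IRQ_HARD_DIS is still set is exactly the
     * out-of-sync state described above; warn loudly when it happens. */
    static inline void check_hard_irq_state(void)
    {
        WARN_ON((mfmsr() & MSR_EE) &&
                (get_paca()->irq_happened & PACA_IRQ_HARD_DIS));
    }
    #endif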
-
- 09 May 2012, 2 commits
-
-
By Benjamin Herrenschmidt

Alignment was the last user of the ENABLE_INTS macro, which we can now remove. All non-syscall exceptions now disable interrupts on entry; they get re-enabled conditionally from C code. Don't unconditionally re-enable in program check either; check the original context.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Benjamin Herrenschmidt

We had a case where we could turn on hard interrupts while leaving the PACA_IRQ_HARD_DIS bit set in the PACA. This can in turn cause a BUG_ON() to hit in __check_irq_replay() due to the interrupt state getting out of sync. The assembly code was also way too convoluted. Instead, we now leave it to the C code to do the right thing, which ends up being smaller and more readable.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 08 May 2012, 3 commits
-
-
By Thomas Gleixner

The core now has a threadinfo allocator which uses a kmemcache when THREAD_SIZE < PAGE_SIZE.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120505150142.059161130@linutronix.de
-
By Thomas Gleixner

cpuidle uses a generic function now. Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120507175652.330322737@linutronix.de
-
By Thomas Gleixner

Commit 771dae81 ("powerpc/cpuidle: Add cpu_idle_wait() to allow switching of idle routines") implemented cpu_idle_wait() for powerpc. The changelog says: "The equivalent routine for x86 is in arch/x86/kernel/process.c but the powerpc implementation is different." Unfortunately the changelog is completely useless as it does not tell _WHY_ it is different. Aside from being different, the implementation is patently wrong. The rescheduling IPI is async. That means that there is no guarantee that the other cores have executed the IPI when cpu_idle_wait() returns. But that's the whole purpose of this function: to guarantee that no CPU uses the old idle handler anymore. Use the smp_call_function() based implementation, which fulfils the requirements. [ This code is going to be replaced by a core version to remove all the pointless copies in arch/*, but this one should go to stable. ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Arun R Bharadwaj <arun.r.bharadwaj@gmail.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120507175651.980164748@linutronix.de
Cc: stable@vger.kernel.org
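The replacement amounts to a synchronous cross-call; a minimal sketch assuming standard smp_call_function() semantics (wait == 1 blocks until every other online CPU has run the handler):

    #include <linux/smp.h>

    static void do_nothing(void *unused)
    {
    }

    /* Returns only after every other online CPU has taken the IPI, i.e.
     * has left whatever idle handler it was executing before. */
    void cpu_idle_wait(void)
    {
        smp_mb();
        smp_call_function(do_nothing, NULL, 1);   /* wait == 1: synchronous */
    }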
-
- 06 May 2012, 2 commits
-
-
By Benjamin Herrenschmidt

This is necessary for qemu to be able to pass the right information to the guest, such as the supported page sizes and corresponding encodings in the SLB and hash table, which can vary depending on the processor type, the type of KVM used (PR vs HV) and the version of KVM.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[agraf: fix compilation on hv, adjust for newer ioctl numbers]
Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Bharat Bhushan

The time for which the hrtimer is started for decrementer emulation is calculated using tb_ticks_per_usec, while the hrtimer uses the clockevent for DEC reprogramming (if needed), which calculates timebase ticks using the multiplier and shifter mechanism implemented within the clockevent layer. It was observed that this conversion (timebase->time->timebase) is not correct because the two mechanisms are not consistent. In our setup it adds 2% jitter. With this patch the clockevent multiplier and shifter mechanism is used when starting the hrtimer for decrementer emulation. Now the jitter is < 0.5%.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
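A hedged sketch of the tick-to-nanosecond conversion this implies, assuming the powerpc decrementer_clockevent and its ns-to-ticks mult/shift pair are visible to the KVM code (an assumption; the real patch may plumb this differently):

    #include <linux/clockchips.h>
    #include <asm/div64.h>

    extern struct clock_event_device decrementer_clockevent;  /* assumed visible */

    /* Invert the clockevent's ns->ticks conversion (ticks = ns * mult >> shift)
     * to turn guest DEC ticks into the nanoseconds used to arm the hrtimer. */
    static u64 dec_ticks_to_ns(u32 dec)
    {
        u64 ns = (u64)dec << decrementer_clockevent.shift;

        do_div(ns, decrementer_clockevent.mult);
        return ns;
    }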
-
- 05 May 2012, 1 commit
-
-
By Thomas Gleixner

Same code. Use the generic version.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20120503085035.211123184@linutronix.de
-
- 03 May 2012, 2 commits
-
-
By Suzuki Poulose

This patch adds support for creating a 1:1 mapping for the PPC_47x during a KEXEC. The implementation is similar to that of the PPC440x, which is described here: http://patchwork.ozlabs.org/patch/104323/

PPC_47x MMU: The 47x uses a Unified TLB with 1024 entries and 4-way associative mapping (4 x 256 entries). The index to be used is calculated by the MMU by hashing the PID, EPN and TS. The software can choose to specify the way by setting bit 0 (enable way select) and the way in bits 1-2 in TLB Word 0.

Implementation: The patch erases all the UTLB entries, which includes the TLB covering the mapping for our code. The shadow TLB caches the mapping for the running code, which helps us continue execution until we do isync/rfi. We then create a temporary mapping for the current code in the other address space (TS) and switch to it. Then we create a 1:1 mapping (EPN=RPN) for 0-2GiB in the original address space and switch to the new mapping.

TODO: Add SMP support.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
By Suzuki Poulose

Initialize the PID register with the kernel PID (0) before we start setting the TLB mapping for KEXEC, and set MMUCR[TID] to the kernel PID as well. This was spotted while testing kexec on ISS for 47x. ISS doesn't return a successful tlbsx for a kernel address with the PID set to a user PID, though the hardware/qemu/simics work fine. This patch is harmless and initializes the PID to 0 (kernel PID), which is usually the case during a normal kernel boot. This would fix kexec on ISS for 440. I have tested this patch on a sequoia board.

Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
- 30 Apr 2012, 10 commits
-
-
By Anton Blanchard

PowerPC has non-standard getregs calls that only dump the GPRs or FPRs and have their arguments reversed. Commit e17666ba ("ptrace updates & new, better requests") in 2.6.3 deprecated them and introduced more standard versions. It's been about 5 years and I know of no users of the old calls, so let's remove them.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Remove CONFIG_POWER4_ONLY. The option is badly named and only does two things:

- It wraps the MMU segment table code. With feature fixups there is little downside to compiling this in.
- It uses the newer mtocrf instruction in various assembly functions. Instead of making this a compile option, just do it at runtime via a feature fixup.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Add two optimisations to enable_kernel_altivec:

- enable_kernel_altivec has already determined if we need to save the previous task's state, but we call giveup_altivec in both cases, requiring an extra branch in giveup_altivec. Create giveup_altivec_notask which only turns on the VMX bit in the MSR.
- We write the VMX MSR bit each time we call enable_kernel_altivec even if it was already set. Check the bit and branch out if we have already set it. The classic case for this is vectored IO, where we have to copy multiple buffers to or from userspace.

The following testcase was used to confirm this patch improves performance: http://ozlabs.org/~anton/junkcode/copy_to_user.c

Since the current breakpoint for using VMX in copy_tofrom_user is 4096 bytes, I'm using buffers of 4096 + 1 cacheline (4224) bytes. A benchmark of 16-entry readvs (-s 16): time copy_to_user -l 4224 -s 16 -i 1000000 completes 5.2% faster on a POWER7 PS700.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Use an empty inline instead of an empty function to implement giveup_altivec on book3e CPUs, similar to flush_altivec_to_thread.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Remove all the iSeries-specific fields in the lppaca.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

At the moment system call entry looks like:

    crclr so
    ...
    mfcr r9
    ...
    std r9,_CCR(r1)

Commit bd19c899 ("[POWERPC] system call micro optimisation") put some space between the crclr and mfcr in order to avoid a stall. There is still a stall seen between the mfcr and std. We can avoid the crclr by doing it in a GPR with rlwinm, which gives us more room to better schedule the sequence.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

The count register is volatile, so we don't need to preserve it. Store zero to the entry in the exception frame.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

The XER is a volatile register, so there is no need to save and restore it over a system call; zero it out in the exception stack frame instead. This should fix a 5-cycle stall of the mfxer/std seen on POWER7.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

syscall_dotrace_cont and syscall_error_cont tend to complicate perf output, so make them local.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Grant Likely

The switch from using irq_map to irq_alloc_desc*() for managing irq number allocations introduced new bugs in some of the powerpc interrupt code. Several functions rely on the value of NR_IRQS to determine the maximum irq number that could get allocated. However, with sparse_irq and using irq_alloc_desc*(), the maximum possible irq number is now specified with 'nr_irqs', which may be a number larger than NR_IRQS. This has caused breakage on powermac when CONFIG_NR_IRQS is set to 32. This patch removes most of the direct references to NR_IRQS in the powerpc code and replaces them with either a nr_irqs reference or the common for_each_irq_desc() macro. The powerpc-specific for_each_irq() macro is removed at the same time. Also, the Cell axon_msi driver is refactored to remove the global build assumption on the size of NR_IRQS and instead add a limit to the maximum irq number when calling irq_domain_add_nomap().

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
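The before/after pattern, sketched with illustrative names (the real changes touch the powerpc irq code, the powermac PIC and axon_msi):

    #include <linux/irq.h>
    #include <linux/irqdesc.h>

    static void scan_irqs(void)
    {
        struct irq_desc *desc;
        unsigned int irq;

        /* before: for (irq = 0; irq < NR_IRQS; irq++) { ... } */

        /* after: bounded by the runtime nr_irqs via the common iterator */
        for_each_irq_desc(irq, desc) {
            if (!desc->action)
                continue;
            /* inspect or fix up the descriptor here */
        }
    }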
-