- 28 November 2011, 5 commits
-
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
-
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
-
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
-
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Reviewed-by: Ryan Mallon <rmallon@gmail.com>
-
We have a clocksource, which renders CLOCK_TICK_RATE useless. Define it to a bogus value to get rid of some ifdeffery. Use a local LATCH for the at91rm9200 timer, but keep it for AT91X40 as we do not use a clocksource there. Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
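A minimal sketch of what this amounts to on the AT91 side, assuming the mach-at91 headers; the placeholder tick rate and the RM9200_TIMER_LATCH name are illustrative assumptions, not the exact values merged:

```c
/* Sketch only: CLOCK_TICK_RATE becomes a bogus placeholder because the
 * clocksource makes it unused; the value below is an assumption. */
#define CLOCK_TICK_RATE		12345678

/* The at91rm9200 system timer runs from the 32768 Hz slow clock, so it
 * computes its own local latch instead of relying on the generic LATCH. */
#define AT91_SLOW_CLOCK		32768
#define RM9200_TIMER_LATCH	((AT91_SLOW_CLOCK + HZ / 2) / HZ)
```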
-
- 22 November 2011, 2 commits
-
Committed by Al Viro
altroot support has been gone for years, along with arch/*/asm/namei.h; this looks like a dummy survivor that sat it out in the microblaze tree... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro
The wrong register was returned... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 20 November 2011, 1 commit
-
Committed by Avi Kivity
Prevent tracing of preempt_disable() in get_cpu_var() in kvm_clock_read(). When CONFIG_DEBUG_PREEMPT is enabled, preempt_disable/enable() are traced, and this causes the function_graph tracer to go into an infinite recursion. By open-coding the preempt_disable() around the get_cpu_var(), we can use the notrace version, which prevents preempt_disable/enable() from being traced and prevents the recursion. Based on a similar patch for Xen from Jeremy Fitzhardinge. Tested-by: Gleb Natapov <gleb@redhat.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Avi Kivity <avi@redhat.com>
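A minimal sketch of the open-coded pattern, modelled on arch/x86/kernel/kvmclock.c as described above; the surrounding names are reconstructed from the description rather than copied verbatim from the patch:

```c
static cycle_t kvm_clock_read(void)
{
	struct pvclock_vcpu_time_info *src;
	cycle_t ret;

	/* Open-coded, notrace preempt_disable(): the tracer never sees
	 * preempt_disable/enable() here, so it cannot recurse into itself. */
	preempt_disable_notrace();
	src = &__get_cpu_var(hv_clock);
	ret = pvclock_clocksource_read(src);
	preempt_enable_notrace();
	return ret;
}
```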
-
- 18 November 2011, 4 commits
-
Committed by David S. Miller
Some of the sun4v code patching occurs in inline functions visible to, and usable by, modules. Therefore we have to patch them up during module load. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
To handle the large physical addresses, just make a simple wrapper around remap_pfn_range(), like MIPS does. Signed-off-by: David S. Miller <davem@davemloft.net>
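A sketch of what such a wrapper looks like, assuming the conventional sparc64 prototype; the merged patch may differ in detail:

```c
/* Wide physical addresses are carried in the pfn argument, so a plain
 * remap_pfn_range() is sufficient, as on MIPS. */
static inline int io_remap_pfn_range(struct vm_area_struct *vma,
				     unsigned long from,
				     unsigned long pfn,
				     unsigned long size,
				     pgprot_t prot)
{
	return remap_pfn_range(vma, from, pfn, size, prot);
}
```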
-
Committed by Rafael J. Wysocki
Fix the build regression introduced by commit 056879d2 (ARM: mach-shmobile: sh7372 A3SP no_suspend_console fix) by moving the initialization of the A3SP domain to a separate function and providing an empty definition of it for the case when CONFIG_PM is unset. Reported-and-tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Russell King
These two syscalls were introduced during the last merge window. Add the entries for them to the ARM call tables. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 November 2011, 16 commits
-
Committed by Alexander Graf
This reverts commit a15bd354. It exceeded the padding on the SREGS struct, rendering the ABI backwards-incompatible. Conflicts: arch/powerpc/kvm/powerpc.c include/linux/kvm.h Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Support guest-only and host-only profiling by switching the perf MSRs on guest entry if needed. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Gleb Natapov
Some CPUs have special support for switching the PERF_GLOBAL_CTRL MSR. Add logic to detect whether such support exists and works properly, and extend the MSR switching code to use it if available. Also extend the number of generic MSR switching entries to 8. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
KVM on s390 has always had a sync MMU: any change in the userspace mapping was always reflected immediately in the guest mapping. In older code the guest mapping was just an offset; in newer code the last-level page table is shared. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
There is a potential host deadlock in the tprot intercept handling. We must not hold the mmap semaphore while resolving the guest address. If userspace is remapping, then memory detection in the guest is broken anyway, so we can safely separate the address translation from walking the VMAs. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Cornelia Huck
SIGP sense running may cause an intercept on higher-level virtualization, so handle it by checking the CPUSTAT_RUNNING flag. Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Cornelia Huck
CPUSTAT_RUNNING was implemented to signify that a vcpu is not stopped. This is not, however, what the architecture says: RUNNING should be set when the host is acting on behalf of the guest operating system. CPUSTAT_RUNNING has been changed to be set in kvm_arch_vcpu_load() and unset in kvm_arch_vcpu_put(). To signify the stopped state of a vcpu, a host-controlled bit is now used and is set/unset essentially as the reverse of the old CPUSTAT_RUNNING bit (including pushing it down into stop handling proper in handle_stop()). Cc: stable@kernel.org Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Will Deacon
On PPC64, put_sigset_t converts a sigset_t to a compat_sigset_t before copying it to userspace. There is a typo in the case where we have 4 words to copy, meaning that we corrupt the compat_sigset_t. It appears that _NSIG_WORDS can't be greater than 2 at the moment, so this code is probably always optimised away anyway. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
The Documentation/memory-barriers.txt document requires that atomic operations that return a value act as a memory barrier both before and after the actual atomic operation. Our current implementation doesn't guarantee this. More specifically, while a load following the isync cannot be issued before stwcx. has completed, that completion doesn't architecturally mean that the result of stwcx. is visible to other processors (or any previous stores, for that matter); typically, the other processors' L1 caches can still hold the old value. This has caused an actual crash in RCU torture testing on POWER7. Fix it by changing those atomic ops to use new macros instead of RELEASE/ACQUIRE barriers, called ATOMIC_ENTRY and ATOMIC_EXIT barriers, which are then defined respectively as lwsync and sync. I haven't had a chance to measure the performance impact (or rather, what I measured with kernel compiles is in the noise; I have yet to find a more precise benchmark). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
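A sketch of the barrier placement, using the macro names from the commit text (the in-tree symbols may be spelled slightly differently):

```c
#ifdef CONFIG_SMP
#define ATOMIC_ENTRY_BARRIER	"\n\tlwsync\n"	/* order prior accesses before the atomic */
#define ATOMIC_EXIT_BARRIER	"\n\tsync\n"	/* full sync: stwcx. result globally visible */
#else
#define ATOMIC_ENTRY_BARRIER
#define ATOMIC_EXIT_BARRIER
#endif

/* Value-returning atomic op wrapped in the new entry/exit barriers. */
static inline int atomic_add_return(int a, atomic_t *v)
{
	int t;

	__asm__ __volatile__(
	ATOMIC_ENTRY_BARRIER
"1:	lwarx	%0,0,%2\n"		/* load-reserve the counter */
"	add	%0,%1,%0\n"
"	stwcx.	%0,0,%2\n"		/* store-conditional, retry if reservation lost */
"	bne-	1b\n"
	ATOMIC_EXIT_BARRIER
	: "=&r" (t)
	: "r" (a), "r" (&v->counter)
	: "cc", "memory");

	return t;
}
```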
-
Committed by Kyle Moffett
Recent binutils refuses to assemble AltiVec opcodes when in e500/SPE mode, as some of those opcodes alias the "SPE" instructions. This triggers an ancient binutils version check even when building a kernel with CONFIG_ALTIVEC disabled. In theory, the check could be conditionalized on CONFIG_ALTIVEC, but in practice it has long outlived its utility. It is virtually impossible to find binutils older than 2.12.1 (released 2002) in the wild anymore. Even ancient Red Hat Enterprise Linux 4 has binutils-2.14. To fix the kernel build when done natively on e500 systems with this new binutils, the test is simply removed. Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kumar Gala
With the introduction of CONFIG_PPC_ADV_DEBUG_REGS, user-space debug is broken on Book-E 64-bit parts that support delayed debug events. When switch_booke_debug_regs() sets DBCR0, we'll start getting debug events, as MSR_DE is also set, and we aren't able to handle debug events from kernel space. We can remove the hack that always enables MSR_DE and loads up DBCR0 and just utilize switch_booke_debug_regs() to get user-space debug working again. We still need to handle critical/debug exception stacks and proper save/restore of state for those exception levels to support debug events from kernel space like we have on 32-bit. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kumar Gala
All of DebugException is already protected by CONFIG_PPC_ADV_DEBUG_REGS; there is no need to have another such ifdef inside the function. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kumar Gala
We had an existing ifdef for 4xx & BOOKE processors that got changed to CONFIG_PPC_ADV_DEBUG_REGS. The define has nothing to do with CONFIG_PPC_ADV_DEBUG_REGS. The define really should be: #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) and not #ifdef CONFIG_PPC_ADV_DEBUG_REGS. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Zhenzhong Duan
A PVHVM guest running with more than 32 vcpus and pv_irq/pv_time enabled needs VCPU placement to work, or else it will soft-lockup. CC: stable@kernel.org Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Committed by David Vrabel
When mapping a foreign page with xenbus_map_ring_valloc() via the GNTTABOP_map_grant_ref hypercall, set the GNTMAP_contains_pte flag and pass a pointer to the PTE (in init_mm). After the page is mapped, the usual fault mechanism can be used to update additional MMs. This allows the vmalloc_sync_all() to be removed from alloc_vm_area(). Signed-off-by: David Vrabel <david.vrabel@citrix.com> Acked-by: Andrew Morton <akpm@linux-foundation.org> [v1: Squashed fix by Michal for no-mmu case] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Michal Simek <monstr@monstr.eu>
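A hedged sketch of the mapping step; the structure and flags come from the public grant-table ABI (xen/interface/grant_table.h), while the PTE-lookup helper is purely illustrative:

```c
static int map_foreign_page(struct vm_struct *area, grant_ref_t gnt_ref,
			    domid_t otherend_id)
{
	struct gnttab_map_grant_ref op = {
		.flags = GNTMAP_host_map | GNTMAP_contains_pte,
		.ref   = gnt_ref,
		.dom   = otherend_id,
	};

	/* With GNTMAP_contains_pte, host_addr carries the machine address of
	 * the init_mm PTE covering area->addr, not a virtual address.
	 * pte_machine_addr_of() is a hypothetical helper for that lookup. */
	op.host_addr = pte_machine_addr_of(area->addr);

	if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
		BUG();

	return op.status == GNTST_okay ? 0 : -EINVAL;
}
```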
-
- 16 November 2011, 12 commits
-
Committed by Geoff Levand
Move the PS3 IPI message setup from ps3_smp_setup_cpu() to ps3_smp_probe(). Fixes startup warnings like these: ------------[ cut here ]------------ WARNING: at kernel/lockdep.c:2649 Modules linked in: ... ---[ end trace 31fd0ba7d8756001 ]--- Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Geoff Levand
Fixes the PS3 bootup hang introduced in 3.0-rc1 by: commit 317f3941 sched: Move the second half of ttwu() to the remote cpu. Move the PS3's LV1 EOI call lv1_end_of_interrupt_ext() from ps3_chip_eoi() to ps3_get_irq() for IPI messages. If lv1_send_event_locally() is called between a previous call to lv1_send_event_locally() and the corresponding call to lv1_end_of_interrupt_ext(), the second event will not be delivered to the target cpu. The PS3's SMP IPIs are implemented using lv1_send_event_locally(), so if two IPI messages of the same type are sent to the same target in a relatively short period of time, the second IPI event can become lost when lv1_end_of_interrupt_ext() is called from ps3_chip_eoi(). CC: stable@kernel.org Signed-off-by: Geoff Levand <geoff@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
If you build with KVM and UP it fails with the following errors due to a missing include. /arch/powerpc/kvm/book3s_hv.c: In function 'do_h_register_vpa': arch/powerpc/kvm/book3s_hv.c:156:10: error: 'H_PARAMETER' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:156:10: note: each undeclared identifier is reported only once for each function it appears in arch/powerpc/kvm/book3s_hv.c:192:12: error: 'H_RESOURCE' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:222:9: error: 'H_SUCCESS' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c: In function 'kvmppc_pseries_do_hcall': arch/powerpc/kvm/book3s_hv.c:228:30: error: 'H_SUCCESS' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:232:7: error: 'H_CEDE' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:234:7: error: 'H_PROD' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:238:10: error: 'H_PARAMETER' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:250:7: error: 'H_CONFER' undeclared (first use in this function) arch/powerpc/kvm/book3s_hv.c:252:7: error: 'H_REGISTER_VPA' undeclared (first use in this function) make[2]: *** [arch/powerpc/kvm/book3s_hv.o] Error 1 Signed-off-by: Michael Neuling <mikey@neuling.org> Cc: stable@kernel.org (3.1 only) Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kevin Hao
trace_hardirqs_off will use CALLER_ADDR0 and CALLER_ADDR1. If an exception occurs in user mode, there is only one stack frame on the stack, and accessing CALLER_ADDR1 causes the following call trace. So we create a dummy stack frame to make trace_hardirqs_off happy. WARNING: at kernel/smp.c:459 Modules linked in: NIP: c0093280 LR: c00930a0 CTR: c0010780 REGS: edb87ae0 TRAP: 0700 Not tainted (3.1.0) MSR: 00021002 <ME,CE> CR: 28002888 XER: 00000000 TASK = edce2ac0[17658] 'mthread-lock-on' THREAD: edb86000 CPU: 5 GPR00: 00000001 edb87b90 edce2ac0 00000005 c0019594 edb87bd8 00000001 00000fe3 GPR08: 00041000 c084138c 4e20120d edb87b90 48002888 1001aa7c 00000000 00000000 GPR16: 48830000 10012a8c 00000000 10000af4 00000001 c0810000 00000000 00000000 GPR24: ee9aa920 c0816a18 00000000 00000005 c0019594 edb87bd8 ee20178c edb87b90 NIP [c0093280] smp_call_function_many+0x214/0x2b4 LR [c00930a0] smp_call_function_many+0x34/0x2b4 Call Trace: [edb87b90] [c00930a0] smp_call_function_many+0x34/0x2b4 (unreliable) [edb87bd0] [c00194ec] __flush_tlb_page+0xac/0x100 [edb87c00] [c001957c] flush_tlb_page+0x3c/0x54 [edb87c10] [c00180ac] ptep_set_access_flags+0x74/0x12c [edb87c40] [c0128068] handle_pte_fault+0x2f0/0x9ac [edb87cb0] [c0128c3c] handle_mm_fault+0x104/0x1dc [edb87ce0] [c05f40f4] do_page_fault+0x2dc/0x630 [edb87e50] [c001078c] handle_page_fault+0xc/0x80 Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
kdump fails because we try to execute an HV-only instruction. Feature fixups are being applied after we copy the exception vectors down to 0, so they miss out on any updates. We have always had this issue, but it only became critical in v3.0 when we added CFAR support (breaks POWER5) and in v3.1 when we added POWERNV (breaks everyone). Signed-off-by: Anton Blanchard <anton@samba.org> Cc: <stable@kernel.org> [v3.0+] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
I had to debug a strange situation where all manner of things were failing. SMT threads, storage and network were all completely broken. The root cause was that we couldn't find enough memory to instantiate RTAS; this was a network install, so the initrd was huge. Instead of limping along and failing in mysterious ways, we should just panic up front if RTAS exists and we can't allocate space for it. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
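A sketch of the fail-fast check, assuming the early-boot helpers in arch/powerpc/kernel/prom_init.c (alloc_down(), prom_panic()); the exact call site and message are assumptions:

```c
/* Claim memory for RTAS during early boot; give up loudly if we can't. */
static unsigned long __init reserve_rtas_or_die(unsigned long size)
{
	unsigned long base;

	base = alloc_down(size, PAGE_SIZE, 0);
	if (base == 0)
		prom_panic("Could not allocate memory for RTAS\n");

	return base;
}
```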
-
Committed by Suzuki Poulose
Kexec is not supported on 47x. 47x is a variant of 44x with slightly different MMU and SMP support. There was a typo in the config dependency for kexec; this patch fixes it. Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Josh Boyer <jwboyer@gmail.com> Cc: linux ppc dev <linuxppc-dev@lists.ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Al Viro
We should do what other architectures do and wrap all that code in the appropriate ifdef. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Mathias Krause
The address limit is already set in flush_old_exec(), so this set_fs(USER_DS) is redundant. Signed-off-by: Mathias Krause <minipli@googlemail.com> Cc: Guan Xuetao <gxt@mprc.pku.edu.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
-
Committed by Paul Bolle
Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
-
Committed by David S. Miller
As per the comments added by this commit, %g2 turns out not to be a usable place to save away orig_i0 for syscall restart handling. In fact, all of %g2, %g3, %g4, and %g5 are assumed to be saved across a system call by various bits of code in glibc. %g1 can't be used because it holds the syscall number, which would need to be saved and restored for syscall restart handling too, and that would only compound our problems :-) This leaves us with %g6 and %g7, which are for "system use". %g7 is used as the "thread register" by glibc, but %g6 is used as a compiler and assembler temporary scratch register, and in no instance is %g6 used to hold a value across a system call. Therefore %g6 is safe for storing away orig_i0, at least for now. Signed-off-by: David S. Miller <davem@davemloft.net>
-