- 21 June 2013, 1 commit
-
-
By Ben Hutchings
1. Check for allocation failure
2. Clear the buffer contents, as they may actually be written to flash
3. Don't leak the buffer

Compile-tested only.

[ Tested successfully on my buggy ASUS machine - Matt ]

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
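A minimal sketch of the allocation pattern this fix describes, with illustrative names rather than the actual efivars code (do_flash_write() is a hypothetical helper): allocate zeroed memory so stale heap contents never reach the flash, bail out on allocation failure, and free the buffer on every exit path.

    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/errno.h>

    static int write_record(const void *data, size_t len)
    {
            void *buf;
            int ret;

            buf = kzalloc(len, GFP_KERNEL);     /* zeroed, so nothing stale is flashed */
            if (!buf)
                    return -ENOMEM;             /* check for allocation failure */

            memcpy(buf, data, len);
            ret = do_flash_write(buf, len);     /* hypothetical write-out helper */

            kfree(buf);                         /* don't leak the buffer */
            return ret;
    }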
-
- 20 June 2013, 6 commits
-
-
By Michel Lespinasse
The following change fixes the x86 implementation of trigger_all_cpu_backtrace(), which was previously (accidentally, as far as I can tell) disabled to always return false as on architectures that do not implement this function.

trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h, should call arch_trigger_all_cpu_backtrace() if available, or return false if the underlying arch doesn't implement this function.

x86 did provide a suitable arch_trigger_all_cpu_backtrace() implementation, but it wasn't actually being used because it was declared in asm/nmi.h, which linux/nmi.h doesn't include. Also, linux/nmi.h couldn't easily be fixed by including asm/nmi.h, because that file is not available on all architectures.

I am proposing to fix this by moving the x86 definition of arch_trigger_all_cpu_backtrace() to asm/irq.h.

Tested via: echo l > /proc/sysrq-trigger

Before the change, this uses a fallback implementation which shows backtraces on active CPUs (using smp_call_function_interrupt()). After the change, this shows NMI backtraces on all CPUs.

Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
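A simplified sketch of the dispatch pattern in include/linux/nmi.h that this fix depends on, assuming the 3.10-era no-argument form of the hook (details may differ): the generic wrapper only calls the arch hook if the arch_trigger_all_cpu_backtrace macro is visible where linux/nmi.h is parsed, which is exactly why the declaration has to live in a header that linux/nmi.h actually pulls in.

    #ifdef arch_trigger_all_cpu_backtrace
    static inline bool trigger_all_cpu_backtrace(void)
    {
            arch_trigger_all_cpu_backtrace();  /* arch-provided NMI backtrace */
            return true;
    }
    #else
    static inline bool trigger_all_cpu_backtrace(void)
    {
            return false;                      /* arch hook not visible: report "not done" */
    }
    #endif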
-
By Jed Davis
With this change, we no longer lose the innermost entry in the user-mode part of the call chain. See also the x86 port, which includes the ip, and the corresponding change in arch/arm.

Signed-off-by: Jed Davis <jld@mozilla.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Aneesh Kumar K.V
Book3E uses the hugepd at the PMD level and doesn't encode a pte directly at the pmd level. So it will find the lower bits of the pmd set, and the pmd_bad check throws an error. In fact, the current code will never take the free_hugepd_range() call at all, because it clears the pmd if it finds a hugepd pointer. Fix this by clearing a bad pmd only if it is not a hugepd pointer.

This is a regression introduced by e2b3d202 "powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format".

Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
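A heavily hedged sketch of the check ordering described above (the real hunk lives in the powerpc hugetlb page-table teardown code and differs in detail; argument lists here are assumptions): only treat the PMD as "bad" and clear it when it is not a hugepd pointer, and hand a hugepd pointer to the hugepage freeing path instead.

    if (!is_hugepd(pmd)) {
            /* not a hugepd pointer: normal "bad pmd" handling applies */
            if (pmd_none_or_clear_bad(pmd))
                    continue;
    } else {
            /* hugepd pointer: free via the hugepage path, never clear it as bad */
            free_hugepd_range(tlb, (hugepd_t *)pmd, PMD_SHIFT,
                              addr, next, floor, ceiling);
    }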
-
By Paul Gortmaker
We are in the process of removing all the __cpuinit annotations. While working on making that change, an existing problem was made evident:

  WARNING: arch/x86/kernel/built-in.o(.text+0x198f2): Section mismatch in reference
  from the function cpu_init() to the function .init.text:load_ucode_ap()
  The function cpu_init() references the function __init load_ucode_ap().
  This is often because cpu_init lacks a __init annotation or the annotation
  of load_ucode_ap is wrong.

This now appears because in my working tree, cpu_init() is no longer tagged as __cpuinit, and so the audit picks up the mismatch. The 2nd hypothesis from the audit is the correct one, as there was an incorrect __init tag on the prototype in the header (but __cpuinit was used on the function itself.)

The audit is telling us that the prototype's __init annotation took effect and the function did land in the .init.text section. Checking with objdump on a mainline tree that still has __cpuinit shows that the __cpuinit on the function takes precedence over the __init on the prototype, but that won't be true once we make __cpuinit a no-op.

Even though we are removing __cpuinit, we temporarily align both the function and the prototype on __cpuinit so that the changeset can be applied to stable trees if desired.

[ hpa: build fix only, no object code change ]

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: stable <stable@vger.kernel.org> # 3.9+
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Link: http://lkml.kernel.org/r/1371654926-11729-1-git-send-email-paul.gortmaker@windriver.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
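An illustration of the mismatch the audit flagged, simplified from the description above (the prototype lives in an x86 microcode header; treat the exact declarations as assumptions):

    /* header, before: the __init on the prototype is what took effect */
    extern void __init load_ucode_ap(void);

    /* definition: annotated __cpuinit, which is about to become a no-op */
    void __cpuinit load_ucode_ap(void) { /* ... */ }

    /* fix: align prototype and definition on __cpuinit for now */
    extern void __cpuinit load_ucode_ap(void);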
-
By David Daney
We need to pick up the definition of raw_smp_processor_id() from asm/smp.h. For the !SMP case, we need to supply a definition of raw_smp_processor_id(). Because of the include dependencies we cannot use smp_call_func_t in asm/smp.h, but we do need linux/thread_info.h.

Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
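For context, a sketch of the conventional uniprocessor fallback (this mirrors what the generic headers do; shown only to illustrate why a !SMP definition has to exist at all):

    #ifndef CONFIG_SMP
    /* on a UP build there is only ever CPU 0 */
    # define raw_smp_processor_id()        0
    #endif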
-
By James Hogan
Commit 106c992a ("mm/hugetlb: add more arch-defined huge_pte functions") added an include of <asm-generic/hugetlb.h> to each architecture's <asm/hugetlb.h> (except s390). Unfortunately metag was missed, which resulted in build errors when hugetlbfs is enabled (see below). Add the include for metag too to fix the build errors:

  mm/hugetlb.c In function 'make_huge_pte':
  mm/hugetlb.c +2250 : error: implicit declaration of function 'huge_pte_mkwrite'
  mm/hugetlb.c +2250 : error: implicit declaration of function 'huge_pte_mkdirty'
  ...

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 June 2013, 18 commits
-
-
By Stephane Eranian
This patch fixes broken support of PEBS-LL on SNB-EP/IVB-EP. For some reason, the LDLAT extra reg definition for snb_ep showed up as duplicate in the snb table. This patch moves the definition of LDLAT back into the snb_ep table.

Thanks to Don Zickus for tracking this one down.

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130607212210.GA11849@quad
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Igor Mammedov
The kernel might hang in pvclock_clocksource_read() because uninitialized memory can contain an odd version value in the following cycle:

  do {
      version = __pvclock_read_cycles(src, &ret, &flags);
  } while ((src->version & 1) || version != src->version);

if the secondary kvmclock is accessed before it's registered with kvm.

Clear the garbage in the pvclock shared memory area right after it's allocated to avoid this issue.

Ref: https://bugzilla.kernel.org/show_bug.cgi?id=59521

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[See BZ for analysis. We may want a different fix for 3.11, but this is the safest for now - Paolo]
Cc: <stable@vger.kernel.org> # 3.8
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
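A sketch of the remedy described above, assuming a memblock-based allocation as in the x86 kvmclock setup code (the variable names and the exact allocation call are assumptions): the point is simply that the shared area is zeroed before any vCPU can read a half-initialized version field.

    mem = memblock_alloc(size, PAGE_SIZE);   /* assumed allocation site */
    if (!mem)
            return;

    hv_clock = __va(mem);
    memset(hv_clock, 0, size);               /* no stale/odd version values */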
-
By Scott Wood
kvmppc_lazy_ee_enable() should be called as late as possible, or else we get things like WARN_ON(preemptible()) in enable_kernel_fp() in configurations where preemptible() works.

Note that book3s_pr already waits until just before __kvmppc_vcpu_run to call kvmppc_lazy_ee_enable().

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Dave Kleikamp
This fixes a race where a cpu may re-load a tlb from a stale tsb right after it has been flushed by a remote function call.

I still see some instability when stressing the system with parallel kernel builds while creating memory pressure by writing to /proc/sys/vm/nr_hugepages, but this patch improves the stability significantly.

Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Acked-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Tushar Behera
Commit 75096579 ("lib: devres: Introduce devm_ioremap_resource()") introduced devm_ioremap_resource() and deprecated the use of devm_request_and_ioremap(). While at it, also remove the error message, as devm_ioremap_resource() already prints a similar one.

Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
CC: sparclinux@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
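The generic conversion pattern, sketched here against a platform device rather than the actual sparc driver touched by the commit: devm_ioremap_resource() validates the resource, requests the region, maps it, and logs its own failure message, so the caller only needs the IS_ERR() check.

    struct resource *res;
    void __iomem *base;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))
            return PTR_ERR(base);   /* no extra dev_err() needed here */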
-
By bob picco
The Machine Description (MD) property "address-congruence-offset" is optional. According to the MD specification the value is assumed to be 0UL when not present. This caused an early boot failure on T5.

Signed-off-by: Bob Picco <bob.picco@oracle.com>
CC: sparclinux@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
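A hedged sketch of how an optional MD property with a default is typically handled (mdesc_get_property() is the real sparc accessor; the surrounding variable names are illustrative):

    const u64 *pval;
    u64 ra_offset = 0UL;    /* MD spec: assume 0 when the property is absent */

    pval = mdesc_get_property(md, node, "address-congruence-offset", NULL);
    if (pval)
            ra_offset = *pval;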
-
By Andreas Larsson
This enables interrupts for Leon before having the CPU enter power-down mode. Commit 87fa05ae, "sparc: Use generic idle loop", gets the CPU stuck on idle for Leon systems. On Leon, disabling interrupts and powering down the processor will get the processor stuck waiting for an interrupt that will never be reacted to.

Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Andreas Larsson
This reduces the need from two timers to one timer. Moreover, without this patch, when the "ticker" timer triggers timer_cs_read() via tick_periodic() and reads the value of the usual timer, it can get a wrapped timer value without timer_cs_internal_counter having been updated, leading to the clock going backwards. This effectively hangs one cpu, which gets stuck in update_wall_time() with an offset slightly smaller than 0xffffffffffffffff.

Signed-off-by: Andreas Larsson <andreas@gaisler.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Zhao Hongjiang
'boot_command_line' and 'full_boot_str' have a fixed length, while 'cmdline_p' and 'boot_command' may be larger than them. So use strlcpy() instead of strcpy() to avoid memory overflow.

Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
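A sketch of the substitution, using the boot_command_line copy as the example (the destination size macro COMMAND_LINE_SIZE is assumed here): strlcpy() never writes past the destination buffer and always NUL-terminates, which is exactly the property strcpy() lacks.

    /* before: overflows boot_command_line if the source string is longer */
    strcpy(boot_command_line, *cmdline_p);

    /* after: bounded by the destination's size */
    strlcpy(boot_command_line, *cmdline_p, COMMAND_LINE_SIZE);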
-
By Chen Gang
When "cp >= barg_buf + BARG_LEN-2", it breaks internel looping 'while', but outside loop 'for' still has effect, so "*cp++ = ' '" will continue repeating which may cause memory overflow. So need additional length check for it in the outside looping. Also beautify the related code which found by "./scripts/checkpatch.pl" Signed-off-by: NChen Gang <gang.chen@asianux.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net>
-
By Denis Efremov
EXPORT_SYMBOL and inline directives are contradictory to each other. The patch fixes this inconsistency.

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Denis Efremov <yefremov.denis@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
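An illustration with generic names (not the actual sparc function the patch touches): a function marked 'inline' may be emitted with no out-of-line symbol at all, so exporting it to modules is contradictory; dropping 'inline' gives EXPORT_SYMBOL a real symbol to export.

    /* before: contradictory */
    inline void foo(void) { /* ... */ }
    EXPORT_SYMBOL(foo);

    /* after: out-of-line definition that modules can actually link against */
    void foo(void) { /* ... */ }
    EXPORT_SYMBOL(foo);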
-
By Geert Uytterhoeven
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Randy Dunlap
Fix kconfig warning and build errors on x86_64 by selecting BINFMT_ELF when COMPAT_BINFMT_ELF is being selected.

  warning: (IA32_EMULATION) selects COMPAT_BINFMT_ELF which has unmet direct dependencies (COMPAT && BINFMT_ELF)

  fs/built-in.o: In function `elf_core_dump':
  compat_binfmt_elf.c:(.text+0x3e093): undefined reference to `elf_core_extra_phdrs'
  compat_binfmt_elf.c:(.text+0x3ebcd): undefined reference to `elf_core_extra_data_size'
  compat_binfmt_elf.c:(.text+0x3eddd): undefined reference to `elf_core_write_extra_phdrs'
  compat_binfmt_elf.c:(.text+0x3f004): undefined reference to `elf_core_write_extra_data'

[ hpa: This was sent to me for -next but it is a low risk build fix ]

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Link: http://lkml.kernel.org/r/51C0B614.5000708@infradead.org
Cc: <stable@vger.kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
By John David Anglin
parisc: Use unshadowed index register for flush instructions in flush_dcache_page_asm and flush_icache_page_asm

The comment at the start of pacache.S states that the base and index registers used for fdc, fic, and pdc instructions should not use shadowed registers. Although this is probably unnecessary for tmpalias flushes, there is also no reason not to comply.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
-
By Thomas Bogendoerfer
pci_mmap_page_range() is needed for X11-server support on C8000 with ATI FireGL card.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Helge Deller <deller@gmx.de>
-
By Thomas Bogendoerfer
The C8000 workstation (64 bit kernel only) has a somewhat different serial port configuration than other models. Thomas Bogendoerfer sent a patch to fix this in September 2010, which was now minimally modified by me.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Helge Deller <deller@gmx.de>
-
By Helge Deller
Make sure that we really return -1 (instead of 0x00ff) as the node id for page frame numbers which are not physically available. This finally fixes the kernel panic when running "cat /proc/kpageflags /proc/kpagecount".

Theoretically this patch now limits the number of physical memory ranges to 127 instead of 254, but currently we have MAX_PHYSMEM_RANGES hardcoded to 8, which is sufficient for all existing parisc machines.

Signed-off-by: Helge Deller <deller@gmx.de>
-
By Yinghai Lu
Joshua reported that commit cd7b304dfaf1 (x86, range: fix missing merge during add range) broke mtrr cleanup on his setup in 3.9.5 (the corresponding upstream commit is fbe06b7b):

  *BAD*gran_size: 64K  chunk_size: 16M  num_reg: 6  lose cover RAM: -0G

  https://bugzilla.kernel.org/show_bug.cgi?id=59491

So it rejects the new var mtrr layout. It turns out we have a problem with the initial mtrr range retrieval. The current sequence is:

  x86_get_mtrr_mem_range
    ==> bunches of add_range_with_merge
    ==> bunches of subtract_range
    ==> clean_sort_range
  add_range_with_merge for [0,1M)
  sort_range()

add_range_with_merge can leave blank slots, so we cannot just sort afterwards, or the final result will have an extra blank slot at the head. So move the add_range_with_merge call for [0,1M) up; with that we can avoid the extra clean_sort_range call.

Reported-by: Joshua Covington <joshuacov@googlemail.com>
Tested-by: Joshua Covington <joshuacov@googlemail.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1371154622-8929-2-git-send-email-yinghai@kernel.org
Cc: <stable@vger.kernel.org> v3.9
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- 18 June 2013, 3 commits
-
-
By Zhanghaoyu (A)
__kvm_set_xcr() does the CPL check when setting xcr. __kvm_set_xcr() is called in two flows. One is invoked by the guest, with the call stack shown below:

  handle_xsetbv (or xsetbv_interception)
    kvm_set_xcr
      __kvm_set_xcr

The other is invoked by the host, for example during system reset:

  kvm_arch_vcpu_ioctl
    kvm_vcpu_ioctl_x86_set_xcrs
      __kvm_set_xcr

The former does need the CPL check, but the latter does not.

Cc: stable@vger.kernel.org
Signed-off-by: Zhang Haoyu <haoyu.zhang@huawei.com>
[Tweaks to commit message. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
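A hedged sketch of the resulting split (signatures follow the 3.10-era KVM x86 code but should be treated as approximate): the CPL check stays on the guest-initiated wrapper that the XSETBV exit handlers call, and is no longer performed inside __kvm_set_xcr(), which the host ioctl path also reaches.

    int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
    {
            /* guest path only: XSETBV is privileged, so require CPL 0 */
            if (kvm_x86_ops->get_cpl(vcpu) != 0 ||
                __kvm_set_xcr(vcpu, index, xcr)) {
                    kvm_inject_gp(vcpu, 0);
                    return 1;
            }
            return 0;
    }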
-
By David Daney
asm/kregs.h isn't always included first, so we need an explicit include.

[Fix build breakage introduced by f21afc25 "smp.h: Use local_irq_{save,restore}() in !SMP version of on_each_cpu()."]

Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
By Stephen Warren
Add comments to machine_shutdown()/halt()/power_off()/restart() that describe their purpose and/or requirements re: CPUs being active/not.

In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the requirement that only a single CPU be active for kexec. Adjust Kconfig dependencies for this change.

In machine_halt()/power_off()/restart(), call smp_send_stop() directly, rather than via machine_shutdown(); these functions don't need to completely de-activate all CPUs using hotplug, but rather just quiesce them.

Remove smp_kill_cpus(), and its call from smp_send_stop(). smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling smp_ops.cpu_die() on the target CPUs first. At least some implementations of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra, for example. Since smp_send_stop() is only used for shutdown, halt, and power-off, there is no need to attempt any kind of CPU hotplug here.

Adjust Kconfig to reflect that machine_shutdown() (and hence kexec) relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee that hotplug will work, or even that hotplug is implemented for a particular piece of HW that a multi-platform zImage runs on. Hence, add error-checking to machine_kexec() to determine whether it did work.

Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
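A condensed sketch of the resulting split on ARM (comments and error handling trimmed; the exact bodies are an assumption based on the description above): kexec needs the secondaries fully offlined via hotplug, while halt/power-off/restart only need them quiesced.

    void machine_shutdown(void)
    {
            disable_nonboot_cpus();         /* hotplug all but one CPU away (kexec) */
    }

    void machine_halt(void)
    {
            local_irq_disable();
            smp_send_stop();                /* just quiesce the other CPUs */
            while (1)
                    ;
    }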
-
- 17 June 2013, 5 commits
-
-
By Magnus Damm
Make sure hyp-stub.S gets removed during make distclean. This left-over file was introduced in commit 424e5994 "ARM: zImage/virt: hyp mode entry support for the zImage loader".

Signed-off-by: Magnus Damm <damm@opensource.se>
Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Simon Baatz
Commit f8b63c18 made flush_kernel_dcache_page a no-op, assuming that the pages it needs to handle are kernel mapped only. However, for example when doing direct I/O, pages with user space mappings may occur.

Thus, continue to do lazy flushing if there are no user space mappings. Otherwise, flush the kernel cache lines directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # 3.2+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Gregory CLEMENT
This commit fixes the ID and mask for the PJ4B, which was too restrictive and didn't match the CPU of the Armada 370 SoC.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Po-Yu Chuang
This bug was introduced in commit e651eab0. Some v4/v5 platforms failed to boot due to this.

Signed-off-by: Po-Yu Chuang <ratbert.chuang@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Jon Medhurst
On Cortex-A9 before version r1p0, the LoUIS bit field of the CLIDR register returns zero when it should return one. This leads to cache maintenance operations which rely on this value to not function as intended, causing data corruption.

The workaround for this errata is to detect affected CPUs and correct the LoUIS value read.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 June 2013, 1 commit
-
-
By Chris Metcalf
gcc 4.7.x is emitting calls to __ffsdi2 where previously it used to inline the appropriate ctz instructions. While this needs to be fixed in gcc, it's also easy to avoid having it cause build failures when building with those compilers by exporting __ffsdi2 to modules.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: stable@kernel.org
-
- 15 June 2013, 3 commits
-
-
By Benjamin Herrenschmidt
When replaying interrupts (as a result of the interrupt occurring while soft-disabled), in the case of the decrementer, we are exclusively testing for a pending timer target. However we also use decrementer interrupts to trigger the new "irq_work", which in this case would be missed.

This changes the logic to force a replay in both cases of a timer boundary reached and a decrementer interrupt having actually occurred while disabled. The former test is still useful to catch cases where a CPU having been hard-disabled for a long time completely misses the interrupt due to a decrementer rollover.

CC: <stable@vger.kernel.org> [v3.4+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
-
By Paul Mackerras
Normally, the kernel emulates a few instructions that are unimplemented on some processors (e.g. the old dcba instruction), or privileged (e.g. mfpvr). The emulation of unimplemented instructions is currently not working on the PowerNV platform.

The reason is that on these machines, unimplemented and illegal instructions cause a hypervisor emulation assist interrupt, rather than a program interrupt as on older CPUs. Our vector for the emulation assist interrupt just calls program_check_exception() directly, without setting the bit in SRR1 that indicates an illegal instruction interrupt.

This fixes it by making the emulation assist interrupt set that bit before calling program_check_exception(). With this, old programs that use no-longer implemented instructions such as dcba now work again.

CC: <stable@vger.kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
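A hedged sketch of the shape of the fix (the handler and constant names follow powerpc traps.c conventions for the SRR1 "illegal instruction" reason bit; treat the exact identifiers as assumptions): flag the reason before reusing the common program-check handler.

    void emulation_assist_interrupt(struct pt_regs *regs)
    {
            /* mark this as an illegal-instruction program check in SRR1 */
            regs->msr |= REASON_ILLEGAL;
            program_check_exception(regs);
    }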
-
By Michael Ellerman
It's possible for us to crash when running with ftrace enabled, eg:

  Bad kernel stack pointer bffffd12 at c00000000000a454
  cpu 0x3: Vector: 300 (Data Access) at [c00000000ffe3d40]
      pc: c00000000000a454: resume_kernel+0x34/0x60
      lr: c00000000000335c: performance_monitor_common+0x15c/0x180
      sp: bffffd12
     msr: 8000000000001032
     dar: bffffd12
   dsisr: 42000000

If we look at current's stack (paca->__current->stack) we see it is equal to c0000002ecab0000. Our stack is 16K, and comparing to paca->kstack (c0000002ecab3e30) we can see that we have overflowed our kernel stack. This leads to us writing over our struct thread_info, and in this case we have corrupted thread_info->flags and set _TIF_EMULATE_STACK_STORE.

Dumping the stack we see:

  3:mon> t c0000002ecab0000
  [c0000002ecab0000] c00000000002131c .performance_monitor_exception+0x5c/0x70
  [c0000002ecab0080] c00000000000335c performance_monitor_common+0x15c/0x180
  --- Exception: f01 (Performance Monitor) at c0000000000fb2ec .trace_hardirqs_off+0x1c/0x30
  [c0000002ecab0370] c00000000016fdb0 .trace_graph_entry+0xb0/0x280 (unreliable)
  [c0000002ecab0410] c00000000003d038 .prepare_ftrace_return+0x98/0x130
  [c0000002ecab04b0] c00000000000a920 .ftrace_graph_caller+0x14/0x28
  [c0000002ecab0520] c0000000000d6b58 .idle_cpu+0x18/0x90
  [c0000002ecab05a0] c00000000000a934 .return_to_handler+0x0/0x34
  [c0000002ecab0620] c00000000001e660 .timer_interrupt+0x160/0x300
  [c0000002ecab06d0] c0000000000025dc decrementer_common+0x15c/0x180
  --- Exception: 901 (Decrementer) at c0000000000104d4 .arch_local_irq_restore+0x74/0xa0
  [c0000002ecab09c0] c0000000000fe044 .trace_hardirqs_on+0x14/0x30 (unreliable)
  [c0000002ecab0fb0] c00000000016fe3c .trace_graph_entry+0x13c/0x280
  [c0000002ecab1050] c00000000003d038 .prepare_ftrace_return+0x98/0x130
  [c0000002ecab10f0] c00000000000a920 .ftrace_graph_caller+0x14/0x28
  [c0000002ecab1160] c0000000000161f0 .__ppc64_runlatch_on+0x10/0x40
  [c0000002ecab11d0] c00000000000a934 .return_to_handler+0x0/0x34
  --- Exception: 901 (Decrementer) at c0000000000104d4 .arch_local_irq_restore+0x74/0xa0
  ... and so on

__ppc64_runlatch_on() is called from RUNLATCH_ON in the exception entry path. At that point the irq state is not consistent, ie. interrupts are hard disabled (by the exception entry), but the paca soft-enabled flag may be out of sync. This leads to the local_irq_restore() in trace_graph_entry() actually enabling interrupts, which we do not want. Because we have not yet reprogrammed the decrementer we immediately take another decrementer exception, and recurse.

The fix is twofold. Firstly make sure we call DISABLE_INTS before calling RUNLATCH_ON. The badly named DISABLE_INTS actually reconciles the irq state in the paca with the hardware, making it safe again to call local_irq_save/restore().

Although that should be sufficient to fix the bug, we also mark the runlatch routines as notrace. They are called very early in the exception entry and we are asking for trouble tracing them. They are also fairly uninteresting and tracing them just adds unnecessary overhead.

[ This regression was introduced by fe1952fc "powerpc: Rework runlatch code" by myself --BenH ]

CC: <stable@vger.kernel.org> [v3.4+]
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 June 2013, 1 commit
-
-
By Benjamin Herrenschmidt
The OF code uses irqsafe locks everywhere except in a handful of functions for no obvious reasons. Since the conversion from the old rwlocks, this now triggers lockdep warnings when used at interrupt time. At least one driver (ibmvscsi) seems to be doing that from softirq context.

This converts the few non-irqsafe locks into irqsafe ones, making them consistent with the rest of the code.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
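The conversion pattern, sketched around devtree_lock (the lock name is real; the surrounding function is illustrative): the irqsave variants make the helper safe to call from softirq or interrupt context and match what the rest of drivers/of already does.

    unsigned long flags;

    raw_spin_lock_irqsave(&devtree_lock, flags);
    /* ... walk or update the device tree ... */
    raw_spin_unlock_irqrestore(&devtree_lock, flags);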
-
- 13 June 2013, 2 commits
-
-
By Jussi Kivilinna
The new XTS code for aesni_intel uses input buffers directly as memory operands for pxor instructions, which causes a crash if those buffers are not aligned to 16 bytes. This patch changes the XTS code to handle unaligned memory correctly, by loading memory with movdqu instead.

Reported-by: Dave Jones <davej@redhat.com>
Tested-by: Dave Jones <davej@redhat.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
By Kees Cook
Fixes a typo in register clearing code. Thanks to PaX Team for fixing this originally, and James Troup for pointing it out.

Signed-off-by: Kees Cook <keescook@chromium.org>
Link: http://lkml.kernel.org/r/20130605184718.GA8396@www.outflux.net
Cc: <stable@vger.kernel.org> v2.6.30+
Cc: PaX Team <pageexec@freemail.hu>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-