- 17 December 2013, 10 commits
-
-
Committed by Lorenzo Pieralisi
When CPU idle is enabled, the architectural idle call should go through the idle subsystem to allow CPUs to enter idle states defined by the platform CPU idle back-end operations. This patch, mirroring other architectures' behaviour, adds the CPU idle call to the architectural arch_cpu_idle implementation for arm64. Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
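For illustration, a minimal sketch (an assumption-laden outline, not the verbatim arm64 patch) of what routing the architectural idle call through the generic cpuidle layer can look like; cpu_do_idle() stands in for the low-level architectural idle entry:

    /* Sketch only: defer to cpuidle when a driver is registered, otherwise
     * fall back to the architectural WFI-based idle. */
    void arch_cpu_idle(void)
    {
            if (cpuidle_idle_call())        /* non-zero: no cpuidle driver/governor */
                    cpu_do_idle();
            local_irq_enable();
    }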
-
Committed by Lorenzo Pieralisi
On platforms with power management capabilities, timers that are shut down when a CPU enters deep C-states must be emulated using an always-on timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP system. This patch enables the generic clockevents broadcast infrastructure for arm64, by providing the required Kconfig entries and adding the timer IPI infrastructure. Acked-by: NDaniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: NLorenzo Pieralisi <lorenzo.pieralisi@arm.com>
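As a hedged sketch of the timer IPI infrastructure mentioned above (the handler name and its wiring are assumptions; tick_receive_broadcast() is the generic clockevents broadcast entry point):

    /* Relay a broadcast timer IPI into the generic tick code on the
     * target CPU. */
    static void handle_timer_ipi(void)
    {
            irq_enter();
            tick_receive_broadcast();
            irq_exit();
    }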
-
Committed by Lorenzo Pieralisi
When a CPU is shut down either through CPU idle or suspend to RAM, the contents of the HW breakpoint registers must be reset or restored to proper values when the CPU resumes from low power states. This patch adds debug register restore operations to the HW breakpoint control function and implements a CPU PM notifier that restores the contents of the HW breakpoint registers, allowing proper suspend/resume operations. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
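A hedged sketch of the CPU PM notifier pattern this describes; hw_breakpoint_restore() is a hypothetical helper name, and only the generic cpu_pm registration API is assumed:

    /* Restore the debug registers when a core returns from a low-power
     * state; hw_breakpoint_restore() is a placeholder for the arch code. */
    static int hw_breakpoint_cpu_pm_notify(struct notifier_block *nb,
                                           unsigned long action, void *v)
    {
            if (action == CPU_PM_EXIT)
                    hw_breakpoint_restore();
            return NOTIFY_OK;
    }

    static struct notifier_block hw_breakpoint_cpu_pm_nb = {
            .notifier_call = hw_breakpoint_cpu_pm_notify,
    };

    static void __init hw_breakpoint_pm_init(void)
    {
            cpu_pm_register_notifier(&hw_breakpoint_cpu_pm_nb);
    }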
-
Committed by Lorenzo Pieralisi
Most of the code executed to install and uninstall breakpoints is common and can be factored out into a single function that provides the requested implementation through a runtime operations type. This patch creates a common function that can be used to install/uninstall breakpoints and defines the set of operations that can be carried out through it. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Lorenzo Pieralisi
Upon CPU shutdown and consequent warm reboot, the hypervisor CPU state must be re-initialized. This patch implements a CPU PM notifier that, upon warm boot, calls a KVM hook to properly reinitialize the hypervisor state so that the CPU can be safely resumed. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Lorenzo Pieralisi
When a CPU enters a low power state, its FP register content is lost. This patch adds a notifier to save the FP context on CPU shutdown and restore it on CPU resume. The context is saved and restored only if the suspending thread is not a kernel thread, mirroring the current context switch behaviour. Signed-off-by: NLorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Lorenzo Pieralisi
Kernel subsystems like CPU idle and suspend to RAM require a generic mechanism to suspend a processor, save its context and put it into a quiescent state. The cpu_{suspend}/{resume} implementation provides such a framework through a kernel interface that allows saving/restoring registers, flushing the context to DRAM and suspending/resuming to/from low-power states where processor context may be lost.

The CPU suspend implementation relies on the suspend protocol registered in CPU operations to carry out a suspend request after context is saved and flushed to DRAM. The cpu_suspend interface:

    int cpu_suspend(unsigned long arg);

allows an opaque parameter to be passed, which is handed over to the suspend CPU operations back-end so that it can take action according to the semantics attached to it. The arg parameter allows suspend to RAM and CPU idle drivers to communicate with suspend protocol back-ends; it requires standardization so that the interface can be reused seamlessly across systems, paving the way for generic drivers.

Context memory is allocated on the stack, whose address is stashed in a per-cpu variable to keep track of it and passed to core functions that save/restore the registers required by the architecture. Even though, upon successful execution, the cpu_suspend function shuts down the suspending processor, the warm boot resume mechanism, based on the cpu_resume function, makes the resume path operate as a cpu_suspend function return, so that cpu_suspend can be treated as a C function by the caller, which simplifies coding the PM drivers that rely on the cpu_suspend API.

Upon context save, the minimal amount of memory is flushed to DRAM so that it can be retrieved when the MMU is off and caches are not searched. The suspend CPU operation, depending on the required operations (eg CPU vs Cluster shutdown), is in charge of flushing the cache hierarchy either implicitly (by calling firmware implementations like PSCI) or explicitly by executing the required cache maintenance functions.

Debug exceptions are disabled during cpu_{suspend}/{resume} operations so that debug registers can be saved and restored properly, preventing preemption by debug agents enabled in the kernel.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
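A hedged usage example of the interface above, as a CPU idle or suspend back-end might call it; MY_DEEP_IDLE_PARAM is a made-up argument whose meaning would be defined by the platform suspend protocol:

    /* cpu_suspend() saves and flushes the context, hands 'arg' to the
     * registered suspend back-end and, on warm boot, returns here as if
     * it were an ordinary C function call. */
    static int enter_deep_idle(void)
    {
            int ret = cpu_suspend(MY_DEEP_IDLE_PARAM);

            if (ret)
                    pr_err("deep idle entry failed: %d\n", ret);
            return ret;
    }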
-
Committed by Lorenzo Pieralisi
Power management software requires the kernel to save and restore CPU registers while going through suspend and resume operations triggered by kernel subsystems like CPU idle and suspend to RAM. This patch implements code that provides the save and restore mechanism for the ARM v8 implementation. Memory for the context is passed as a parameter to both the cpu_do_suspend and cpu_do_resume functions, which allows the callers to implement context allocation as they deem fit. The registers that are saved and restored correspond to the register set actually required by the kernel to be up and running, which represents a subset of the v8 ISA. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Lorenzo Pieralisi
On ARM64 SMP systems, cores are identified by their MPIDR_EL1 register. The MPIDR_EL1 guidelines in the ARM ARM do not provide strict enforcement of MPIDR_EL1 layout, only recommendations that, if followed, split the MPIDR_EL1 on ARM 64 bit platforms into four affinity levels. In multi-cluster systems like big.LITTLE, if the affinity guidelines are followed, the MPIDR_EL1 can not be considered a linear index. This means that the association between a logical CPU in the kernel and the HW CPU identifier becomes somewhat more complicated, requiring methods like hashing to associate a given MPIDR_EL1 with a CPU logical index, in order for the look-up to be carried out in an efficient and scalable way.

This patch provides a function in the kernel that, starting from the cpu_logical_map, implements collision-free hashing of MPIDR_EL1 values by checking all significant bits of the MPIDR_EL1 affinity level bitfields. The hashing can then be carried out through bit shifting and ORing; the resulting hash algorithm is a collision-free though not minimal hash that can be executed with few assembly instructions. The mpidr_el1 is filtered through a mpidr mask that is built by checking all bits that toggle in the set of MPIDR_EL1s corresponding to possible CPUs. Bits that do not toggle do not carry information, so they do not contribute to the resulting hash.

Pseudo code:

    /* check all bits that toggle, so they are required */
    for (i = 1, mpidr_el1_mask = 0; i < num_possible_cpus(); i++)
            mpidr_el1_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

    /*
     * Build shifts to be applied to aff0, aff1, aff2, aff3 values to hash the
     * mpidr_el1
     * fls() returns the last bit set in a word, 0 if none
     * ffs() returns the first bit set in a word, 0 if none
     */
    fs0 = mpidr_el1_mask[7:0] ? ffs(mpidr_el1_mask[7:0]) - 1 : 0;
    fs1 = mpidr_el1_mask[15:8] ? ffs(mpidr_el1_mask[15:8]) - 1 : 0;
    fs2 = mpidr_el1_mask[23:16] ? ffs(mpidr_el1_mask[23:16]) - 1 : 0;
    fs3 = mpidr_el1_mask[39:32] ? ffs(mpidr_el1_mask[39:32]) - 1 : 0;
    ls0 = fls(mpidr_el1_mask[7:0]);
    ls1 = fls(mpidr_el1_mask[15:8]);
    ls2 = fls(mpidr_el1_mask[23:16]);
    ls3 = fls(mpidr_el1_mask[39:32]);
    bits0 = ls0 - fs0;
    bits1 = ls1 - fs1;
    bits2 = ls2 - fs2;
    bits3 = ls3 - fs3;
    aff0_shift = fs0;
    aff1_shift = 8 + fs1 - bits0;
    aff2_shift = 16 + fs2 - (bits0 + bits1);
    aff3_shift = 32 + fs3 - (bits0 + bits1 + bits2);

    u32 hash(u64 mpidr_el1) {
            u32 l[4];
            u64 mpidr_el1_masked = mpidr_el1 & mpidr_el1_mask;
            l[0] = mpidr_el1_masked & 0xff;
            l[1] = mpidr_el1_masked & 0xff00;
            l[2] = mpidr_el1_masked & 0xff0000;
            l[3] = mpidr_el1_masked & 0xff00000000;
            return (l[0] >> aff0_shift | l[1] >> aff1_shift |
                    l[2] >> aff2_shift | l[3] >> aff3_shift);
    }

The hashing algorithm relies on the inherent properties set in the ARM ARM recommendations for the MPIDR_EL1. Exotic configurations, where for instance the MPIDR_EL1 values at a given affinity level have large holes, can end up requiring big hash tables since the compression of values that can be achieved through shifting is somewhat crippled when holes are present. The kernel warns if the number of buckets of the resulting hash table exceeds the number of possible CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR_EL1 configuration.

The hash algorithm is quite simple and can easily be implemented in assembly code, to be used in code paths where the kernel virtual address space is not set up (ie cpu_resume) and instruction and data fetches are strongly ordered, so code must be compact and must carry out few data accesses.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Committed by Lorenzo Pieralisi
In order to simplify access to different affinity levels within the MPIDR_EL1 register values, this patch implements some preprocessor macros that retrieve the MPIDR_EL1 affinity level value according to the level passed as an input parameter. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
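A sketch of what such accessors can look like, following the ARMv8 MPIDR_EL1 layout (Aff0-Aff2 in bits [23:0], Aff3 in bits [39:32]); the exact macro names used by the patch may differ:

    #define MPIDR_LEVEL_BITS        8
    #define MPIDR_LEVEL_MASK        ((1 << MPIDR_LEVEL_BITS) - 1)

    /* Affinity levels 0, 1, 2 start at bits 0, 8, 16; level 3 starts at bit 32. */
    #define MPIDR_LEVEL_SHIFT(level)        ((((1) << (level)) >> 1) << 3)

    #define MPIDR_AFFINITY_LEVEL(mpidr, level) \
            (((mpidr) >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)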
-
- 14 December 2013, 1 commit
-
-
Committed by Russell King
Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is not defined:

    In file included from arch/arm/include/asm/page.h:163:0,
                     from include/linux/mm_types.h:16,
                     from include/linux/sched.h:24,
                     from arch/arm/kernel/asm-offsets.c:13:
    arch/arm/include/asm/memory.h: In function '__virt_to_phys':
    arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
    arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
    arch/arm/include/asm/memory.h: In function '__phys_to_virt':
    arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c0 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions") Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 December 2013, 3 commits
-
-
Committed by Gleb Natapov
A guest can cause a BUG_ON() leading to a host kernel crash. When the guest writes to the ICR to request an IPI while in x2apic mode, the following happens: the destination is read from ICR2, which is a register that the guest can control, and kvm_irq_delivery_to_apic_fast uses the high 16 bits of ICR2 as the cluster id. A BUG_ON is triggered, which is a protection against accessing map->logical_map with an out-of-bounds access and manages to avoid anything really unsafe occurring. The logic in the code is correct from a real HW point of view. The problem is that KVM supports only one cluster with ID 0 in clustered mode, but the code that has the bug does not take this into account. Reported-by: Lars Bull <larsbull@google.com> Cc: stable@vger.kernel.org Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Andy Honig
In kvm_lapic_sync_from_vapic and kvm_lapic_sync_to_vapic there is the potential to corrupt kernel memory if userspace provides an address that is at the end of a page. This patch converts those functions to use kvm_write_guest_cached and kvm_read_guest_cached. It also checks the vapic_address specified by userspace during ioctl processing and returns an error to userspace if the address is not a valid GPA. This is generally not guest triggerable, because the required write is done by firmware that runs before the guest. Also, it only affects AMD processors and older Intel processors that do not have the FlexPriority feature (unless you disable FlexPriority, of course; then newer processors are also affected). Fixes: b93463aa ('KVM: Accelerated apic support') Reported-by: Andrew Honig <ahonig@google.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Honig <ahonig@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Andy Honig
Under guest-controllable circumstances apic_get_tmcct will execute a divide by zero and cause a crash. If the guest CPUID supports TSC deadline timers and performs the following sequence of requests, the host will crash:

    - Set the mode to periodic
    - Set the TMICT to 0
    - Set the mode bits to 11 (neither periodic, nor one shot, nor tsc deadline)
    - Set the TMICT to non-zero.

Then the lapic_timer.period will be 0, but the TMICT will not be. If the guest then reads from the TMCCT, the host will perform a divide by 0. This patch ensures that if the lapic_timer.period is 0, then the division does not occur. Reported-by: Andrew Honig <ahonig@google.com> Cc: stable@vger.kernel.org Signed-off-by: Andrew Honig <ahonig@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
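The shape of the guard described above, as a hedged sketch rather than the verbatim KVM change; only the early return matters here:

    /* Simplified: the real function derives the current count from the
     * period; the point is bailing out before the division when the
     * timer is not armed. */
    static u32 apic_get_tmcct_guarded(struct kvm_lapic *apic)
    {
            u32 tmcct = 0;

            if (apic->lapic_timer.period == 0)
                    return 0;       /* avoid the divide-by-zero */

            /* ... existing logic that divides by apic->lapic_timer.period
             * to compute tmcct goes here ... */
            return tmcct;
    }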
-
- 12 December 2013, 5 commits
-
-
Committed by Maxime Ripard
The Allwinner A31 uses the ARM GIC as its internal interrupt controller. The GIC can work with several interrupt triggers, and the A31 was setting it up to use a rising edge as the trigger, while it is actually a level-high trigger, leading to some interrupts being completely ignored if the edge was missed. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Cc: stable@vger.kernel.org # 3.12+ Signed-off-by: Olof Johansson <olof@lixom.net>
-
Committed by Maxime Ripard
The Allwinner A20 uses the ARM GIC as its internal interrupt controller. The GIC can work with several interrupt triggers, and the A20 was setting it up to use a rising edge as the trigger, while it is actually a level-high trigger, leading to some interrupts being completely ignored if the edge was missed. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Cc: stable@vger.kernel.org #3.12+ Signed-off-by: Olof Johansson <olof@lixom.net>
-
Committed by Stephen Warren
Add a missing break to the switch in tegra_init_fuse() which determines which SoC the code is running on. This prevents the Tegra30+ fuse handling code from running on Tegra20. Fixes: 3bd1ae57 ("ARM: tegra: add fuses as device randomness") Signed-off-by: NStephen Warren <swarren@nvidia.com> Signed-off-by: NOlof Johansson <olof@lixom.net>
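A generic illustration of the bug class (made-up names, not the Tegra code): without the break, the first case falls through and the later SoC's handling also runs.

    enum chip { CHIP_OLD, CHIP_NEW };

    static void setup_old_fuses(void) { }
    static void setup_new_fuses(void) { }

    static void init_fuse(enum chip chip)
    {
            switch (chip) {
            case CHIP_OLD:
                    setup_old_fuses();
                    break;                  /* the missing break */
            case CHIP_NEW:
                    setup_new_fuses();      /* without it, this also ran for CHIP_OLD */
                    break;
            }
    }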
-
Committed by Sergei Ianovich
Erratum 71 of the PXA270M Processor Family Specification Update (April 19, 2010) explains that the watchdog reset time is just 8us instead of the 10ms in the EMTS. If SDRAM is not reset, it causes memory bus congestion and the device hangs. We put SDRAM in self-refresh mode before the watchdog reset, removing potential freezes. Without this patch the PXA270-based ICP DAS LP-8x4x hangs after up to 40 reboots. With this patch it has successfully rebooted 500 times. Signed-off-by: Sergei Ianovich <ynvich@gmail.com> Tested-by: Marek Vasut <marex@denx.de> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Olof Johansson <olof@lixom.net>
-
Committed by Dmitry Eremin-Solenikov
When converting from the tosa-keyboard driver to the matrix keypad driver, the tosa keys received an extra one-column shift. Replace that with the correct values to make the keyboard work again. Fixes: f69a6548 ('[ARM] pxa/tosa: make use of the matrix keypad driver') Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 11 December 2013, 2 commits
-
-
Committed by Matthew Garrett
UEFI time services are often broken once we're in virtual mode. We were already refusing to use them on 64-bit systems, but it turns out that they're also broken on some 32-bit firmware, including the Dell Venue. Disable them for now, we can revisit once we have the 1:1 mappings code incorporated. Signed-off-by: NMatthew Garrett <matthew.garrett@nebula.com> Link: http://lkml.kernel.org/r/1385754283-2464-1-git-send-email-matthew.garrett@nebula.com Cc: <stable@vger.kernel.org> Cc: Matt Fleming <matt.fleming@intel.com> Signed-off-by: NH. Peter Anvin <hpa@linux.intel.com>
-
Committed by Nishanth Menon
Due to the cross dependencies between hwmod (for automanaged device information on OMAP) and dts node definitions, we can run into scenarios where the dts node is defined but its hwmod entry is yet to be added. In these cases:

a) omap_device does not register a pm_domain (since it cannot find the hwmod entry).
b) The driver does not know about (a) and does a pm_runtime_get_sync, which never fails.
c) It then tries to do some operation on the device (such as reading the revision register as part of probe) without a clock or the adequate OMAP generic PM operations performed to enable the module.

This causes a crash such as that reported in: https://bugzilla.kernel.org/show_bug.cgi?id=66441

When 'ti,hwmod' is provided in a dt node, it is expected that the device will not function without OMAP's power automanagement. Hence, when we hit a fail condition (due to hwmod entries not being present or a similar scenario), fail at the pm_domain level due to lack of data and provide enough information for it to be fixed; this allows the driver to take appropriate measures to prevent the crash. Reported-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Signed-off-by: Nishanth Menon <nm@ti.com> Acked-by: Kevin Hilman <khilman@linaro.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Kevin Hilman <khilman@linaro.org>
-
- 10 December 2013, 19 commits
-
-
Committed by cpw
The SGI UV tlb shootdown code panics the system with a NULL pointer dereference if 'nobau' is specified on the boot command line. uv_flush_tlb_other() gets called for every flush, whether the BAU is disabled or not. It should not be keeping the s_enters statistic while the BAU is disabled. The panic occurs because during initialization init_per_cpu_tunables() does not set the bcp->statp pointer if 'nobau' was specified. Signed-off-by: Cliff Wickman <cpw@sgi.com> Cc: <stable@vger.kernel.org> # 3.12.x Link: http://lkml.kernel.org/r/E1VnzBi-0005yF-MU@eag09.americas.sgi.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Paul Walmsley
Treat both negative and zero return values from clk_round_rate() as errors. This is needed since subsequent patches will convert clk_round_rate()'s return value to be an unsigned type, rather than a signed type, since some clock sources can generate rates higher than (2^31)-1 Hz. Eventually, when calling clk_round_rate(), only a return value of zero will be considered an error. All other values will be considered valid rates. The comparison against values less than 0 is kept to preserve the correct behavior in the meantime. Signed-off-by: Paul Walmsley <paul@pwsan.com> Cc: Nicolas Ferre <nicolas.ferre@atmel.com> Cc: Håvard Skinnemoen <hskinnemoen@gmail.com> Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no> Cc: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
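A hedged example of the call-site pattern this change moves callers towards; clk and target_hz are illustrative parameters:

    static int set_closest_rate(struct clk *clk, unsigned long target_hz)
    {
            long rounded = clk_round_rate(clk, target_hz);

            /* Both negative and zero are treated as errors for now; once
             * the return type becomes unsigned, only zero will signal
             * failure. */
            if (rounded <= 0)
                    return rounded ? rounded : -EINVAL;

            return clk_set_rate(clk, rounded);
    }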
-
Committed by Michael Opdenacker
This patch proposes to remove the use of the IRQF_DISABLED flag. It has been a no-op since 2.6.35 and will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Acked-by: Hans-Christian Egtvedt <egtvedt@samfundet.no>
-
Committed by Matthias Brugger
The power management has a section mismatch which leads to the following warning during compilation: WARNING: arch/avr32/mach-at32ap/built-in.o(.text+0x16d4): Section mismatch in reference from the function avr32_pm_offset() to the function .init.text:pm_exception() The function avr32_pm_offset() references the function __init pm_exception(). Signed-off-by: NMatthias Brugger <matthias.bgg@gmail.com> Acked-by: NHans-Christian Egtvedt <hegtvedt@cisco.com>
-
Committed by Eunbong Song
This patch removes CONFIG_MTD_PARTITIONS from the config files for avr32, because CONFIG_MTD_PARTITIONS was removed by commit 6a8a98b2. Signed-off-by: Eunbong Song <eunb.song@samsung.com> Acked-by: Hans-Christian Egtvedt <hegtvedt@cisco.com>
-
Committed by Mahesh Salgaonkar
The current logic sets the kdump base to the minimum of 2G and ppc64_rma_size/2. On a PowerNV kernel the first memory block 'memory@0' can be very large, equal to the DIMM size, with the ppc64_rma_size value capped to 1G. Hence on PowerNV the kdump base is set to 512M, causing kdump to fail while allocating the paca array. This is because the pacas need their memory from the RMA region, which is capped at 256M (see allocate_pacas()). This patch lowers the kdump base cap to 128M so that the kdump kernel can successfully get memory below 256M for paca allocation. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
I have recently found out that no iommu_groups could be found under /sys/ on a P8. That prevents PCI passthrough from working. During my investigation, I found out there seems to be a missing iommu_register_group for PHB3. The following patch seems to fix the problem. After applying it, I see iommu_groups under /sys/kernel/iommu_groups/, and can also bind vfio-pci to an adapter, which gives me a device at /dev/vfio/. Signed-off-by: NThadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com> Signed-off-by: NBenjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anatolij Gustschin
The bestcomm driver has been moved to drivers/dma, so to select this driver by default CONFIG_DMADEVICES additionally has to be enabled. Currently it is not enabled despite CONFIG_PPC_BESTCOMM=y existing in the config files. Fix it. Signed-off-by: Anatolij Gustschin <agust@denx.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Olof Johansson
At least some distros expect it these days; turn it on. Also, random churn from doing a savedefconfig for the first time in a year or so. Signed-off-by: NOlof Johansson <olof@lixom.net> Signed-off-by: NBenjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Hong H. Pham
In pte_alloc_one(), pgtable_page_ctor() is passed an address that has not been converted by page_address() to the newly allocated PTE page. When the PTE is freed, __pte_free_tlb() calls pgtable_page_dtor() with an address to the PTE page that has been converted by page_address(). The mismatch in the PTE's page address causes pgtable_page_dtor() to access invalid memory, so resources for that PTE (such as the page lock) are not properly cleaned up. On PPC32, only SMP kernels are affected. On PPC64, only SMP kernels with 4K page size are affected. This bug was introduced by commit d614bb04 "powerpc: Move the pte free routines from common header". On a preempt-rt kernel, a spinlock is dynamically allocated for each PTE in pgtable_page_ctor(). When the PTE is freed, calling pgtable_page_dtor() with a mismatched page address causes a memory leak, as the pointer to the PTE's spinlock is bogus. On mainline, there aren't any immediately obvious symptoms, but the problem still exists here. Fixes: d614bb04 "powerpc: Move the pte free routines from common header" Cc: Paul Mackerras <paulus@samba.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linux-stable <stable@vger.kernel.org> # v3.10+ Signed-off-by: Hong H. Pham <hong.pham@windriver.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ilia Mirkin
Allocate enough memory for the ocm_block structure, not just a pointer to it. Signed-off-by: NIlia Mirkin <imirkin@alum.mit.edu> Signed-off-by: NBenjamin Herrenschmidt <benh@kernel.crashing.org>
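A generic illustration of the bug class (not the literal powerpc code): sizeof on the pointer allocates room for a pointer, while sizeof on what it points to allocates the whole structure.

    static struct ocm_block *alloc_block(void)
    {
            struct ocm_block *blk;

            /* Buggy form: kzalloc(sizeof(blk), ...) allocates only room for
             * a pointer.  The correct form below allocates the structure. */
            blk = kzalloc(sizeof(*blk), GFP_KERNEL);
            return blk;
    }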
-
Committed by Michael Ellerman
A kernel configured with PPC_EARLY_DEBUG_BOOTX=y but PPC_PMAC=n and PPC_MAPLE=n will fail to link: btext.c:(.text+0x2d0fc): undefined reference to `.rmci_off' btext.c:(.text+0x2d214): undefined reference to `.rmci_on' Fix it by making the build of rmci_on/off() depend on PPC_EARLY_DEBUG_BOOTX, which also enable the only code that uses them. Signed-off-by: NMichael Ellerman <mpe@ellerman.id.au> Signed-off-by: NBenjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by H. Peter Anvin
In checkin 5551a34e ("x86-64, build: Always pass in -mno-sse") we unconditionally added -mno-sse to the main build, to keep newer compilers from generating SSE instructions from autovectorization. However, this did not extend to the special environments (arch/x86/boot, arch/x86/boot/compressed, and arch/x86/realmode/rm). Add -mno-sse to the compiler command line for these environments, and add -mno-mmx to all the environments as well, as we don't want a compiler to generate MMX code either. This patch also removes a $(cc-option) call for -m32, since we have long since stopped supporting compilers too old for the -m32 option, and in fact hardcode it in other places in the Makefiles. Reported-by: Kevin B. Smith <kevin.b.smith@intel.com> Cc: Sunil K. Pandey <sunil.k.pandey@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: H. J. Lu <hjl.tools@gmail.com> Link: http://lkml.kernel.org/n/tip-j21wzqv790q834n7yc6g80j1@git.kernel.org Cc: <stable@vger.kernel.org> # build fix only
-
Committed by Jon Medhurst
The __do_cache_op function operates with a 'chunk' size of one page but fails to limit the size of the final chunk so as to not exceed the specified memory region. Fix this. Cc: <stable@vger.kernel.org> Reported-by: NChristian Gmeiner <christian.gmeiner@gmail.com> Tested-by: NChristian Gmeiner <christian.gmeiner@gmail.com> Acked-by: NWill Deacon <will.deacon@arm.com> Signed-off-by: NJon Medhurst <tixy@linaro.org> Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
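A hedged sketch of the loop shape with the final chunk clamped to the end of the requested region (simplified; flush_one_chunk() is an illustrative placeholder, not the real helper):

    static void cache_op_range(unsigned long start, unsigned long end)
    {
            do {
                    /* never let the last chunk run past 'end' */
                    unsigned long chunk = min(PAGE_SIZE, end - start);

                    flush_one_chunk(start, start + chunk);
                    start += chunk;
            } while (start < end);
    }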
-
Committed by Konstantin Khlebnikov
This patch fixes corner case when (fp + 4) overflows unsigned long, for example: fp = 0xFFFFFFFF -> fp + 4 == 3. Cc: <stable@vger.kernel.org> Signed-off-by: NKonstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
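An illustration of the corner case and a hedged version of the kind of guard needed (not the verbatim ARM fix): with fp == 0xFFFFFFFF, fp + 4 wraps to 3, so a naive upper-bound comparison passes.

    /* Reject frames whose reads would wrap around the address space. */
    static int frame_pointer_ok(unsigned long fp, unsigned long low, unsigned long high)
    {
            if (fp < low || fp > high - 4)
                    return 0;
            if (fp + 4 < fp)        /* unsigned wrap-around */
                    return 0;
            return 1;
    }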
-
Committed by Konstantin Khlebnikov
get_wchan() is lockless. A task may wake up at any time and change its own stack, thus each next stack frame may be overwritten and filled with random stuff. The /proc/$pid/stack interface had been disabled for non-current tasks, see [1], but 'wchan' still allows triggering stack frame unwinding on a volatile stack. This patch fixes an oops in unwind_frame() by adding stack pointer validation on each step (as the x86 code does); unwind_frame() already checks the frame pointer. Also I've found another report of this oops on Stack Overflow (irony). Link: http://www.spinics.net/lists/arm-kernel/msg110589.html [1] Link: http://stackoverflow.com/questions/18479894/unwind-frame-cause-a-kernel-paging-error Cc: <stable@vger.kernel.org> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
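A hedged sketch, in the spirit of the x86 code referenced above, of validating each candidate pointer against the task's stack bounds before dereferencing it (names are illustrative):

    static int on_task_stack(struct task_struct *p, unsigned long fp)
    {
            unsigned long stack = (unsigned long)task_stack_page(p);

            /* fp must lie within the task's stack, with room for one word */
            return fp >= stack &&
                   fp <= stack + THREAD_SIZE - sizeof(unsigned long);
    }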
-
Committed by Santosh Shilimkar
To get updated __pv_phys_offset, setup_dma_zone() needs to be called after early_paging_init(). Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: NSantosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Santosh Shilimkar
Current code is using PHYS_OFFSET to calculate the arm_dma_limit which will lead to wrong calculations in cases where PHYS_OFFSET is updated runtime. So fix the code by using __pv_phys_offset instead of PHYS_OFFSET. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: NSantosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Peter reports that OMAP audio broke with the recent fix for these checks, caused by OMAP audio using a 64-bit DMA mask. We should allow 64-bit DMA masks even with 32-bit dma_addr_t if we can be sure the amount of RAM we have won't allow the 32-bit dma_addr_t to overflow. Unfortunately, the checks to detect overflow were not correct. Tested-by: NPeter Ujfalusi <peter.ujfalusi@ti.com> Signed-off-by: NRussell King <rmk+kernel@arm.linux.org.uk>
-