- 17 May 2012, 2 commits

Committed by Will Deacon

Commit ff9a184c ("ARM: 7400/1: vfp: clear fpscr length and stride bits on entry to sig handler") flushes the VFP state prior to entering a signal handler so that a VFP operation inside the handler will trap and force a restore of ABI-compliant registers. Reflushing and disabling VFP on the sigreturn path is predicated on the saved thread state indicating that VFP was used by the handler -- however, for SMP platforms this is only set on context switch, making the check unreliable and causing VFP register corruption in userspace, since the register values are not necessarily those restored from the sigframe. This patch unconditionally flushes the VFP state after a signal handler. Since we already perform the flush before the handler and the flushing itself happens lazily, the redundant flush when VFP is not used by the handler is essentially a nop.

Reported-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Vitaly Andrianov

A zero value for prot_sect in the memory types table implies that section mappings should never be created for the memory type in question. This is checked for in alloc_init_section(). With LPAE, we set a bit to mask access flag faults for kernel mappings. This breaks the aforementioned (!prot_sect) check in alloc_init_section(). This patch fixes the bug by first checking for a non-zero prot_sect before setting the PMD_SECT_AF flag.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

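For illustration, a standalone sketch of the guard described above; the table contents and the PMD_SECT_AF value are stand-ins for this example, not the kernel's definitions.

```c
#include <stdio.h>

#define PMD_SECT_AF   (1u << 10)   /* stand-in value for the access-flag bit */

struct mem_type {
        const char  *name;
        unsigned int prot_sect;    /* 0 means "never use a section mapping" */
};

static struct mem_type mem_types[] = {
        { "MT_DEVICE", 0x0402 },
        { "MT_ROM",    0x0000 },   /* must stay zero */
};

int main(void)
{
        for (unsigned int i = 0; i < sizeof(mem_types) / sizeof(mem_types[0]); i++) {
                /* Only tag the access flag when a section mapping is allowed
                 * at all, so the !prot_sect check elsewhere keeps working. */
                if (mem_types[i].prot_sect)
                        mem_types[i].prot_sect |= PMD_SECT_AF;
                printf("%s: prot_sect=%#x\n", mem_types[i].name, mem_types[i].prot_sect);
        }
        return 0;
}
```
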
- 16 May 2012, 3 commits

Committed by Russell King

No one uses the per-hw list of buses, so get rid of it. Instead, build the list locally.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Use sys->private_data to store the PCIe port data structure.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Cc: <stable@vger.kernel.org>
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 14 May 2012, 4 commits

Committed by Russell King

Most PCI implementations perform simple root bus scanning. Rather than having each group of platforms provide a duplicated bus scan function, provide the PCI configuration ops structure via the hw_pci structure, and call the root bus scanning function from core ARM PCI code.

Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

Most PCI implementations use the standard PCI swizzle function, which handles the well-defined behaviour of PCI-to-PCI bridges found on cards (eg, four-port ethernet cards). Rather than having almost every platform specify the standard swizzle function, make it the default when no swizzle function is supplied. A swizzle function then only needs to be provided when there is something exceptional to handle. This gets rid of the swizzle initializer from 47 files and leaves us with just two platforms specifying a swizzle function: ARM Integrator and Chalice CATS.

Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

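For reference, the standard swizzle is simple modular arithmetic applied while walking up the bridge chain. The sketch below uses a simplified device structure as a stand-in for struct pci_dev.

```c
#include <stdio.h>

/* Simplified stand-in for struct pci_dev: just a slot number and the bridge
 * the device sits behind (NULL when the device is on the root bus). */
struct dev {
        unsigned int slot;
        const struct dev *bridge;
};

/* Standard swizzle: each PCI-to-PCI bridge rotates INTA..INTD (pins 1..4)
 * by the slot number of the device below it. */
static unsigned int std_swizzle(const struct dev *d, unsigned int pin)
{
        while (d->bridge) {
                pin = ((pin - 1 + d->slot) % 4) + 1;
                d = d->bridge;
        }
        return pin;
}

int main(void)
{
        struct dev bridge = { .slot = 2, .bridge = NULL };   /* on the root bus */
        struct dev nic    = { .slot = 1, .bridge = &bridge };

        /* INTA (pin 1) on the NIC arrives at the root bus as INTB (pin 2). */
        printf("pin seen upstream: %u\n", std_swizzle(&nic, 1));
        return 0;
}
```
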
Committed by Russell King

This is at odds with the documentation in the file; it says pin 1 on slots 24, 25, 26 and 27 maps to IRQs 27, 28, 29 and 30, but the function will always be entered with slot=0 due to the lack of a swizzle function. Fix this function to behave as the comments say, and use the standard PCI swizzle.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Russell King

The Integrator swizzle function is almost the same as the standard PCI swizzle, except for an initial check for pin = 0. Make the Integrator swizzle function a wrapper around the standard PCI swizzle function so we preserve this behaviour while using common code. [fix to use pci_std_swizzle from Linus Walleij]

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 12 May 2012, 3 commits

Committed by Catalin Marinas

These have already been removed from the classic MMU in favour of the L_PTE_MT_* macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Will Deacon

The vfp_enable function enables access to the VFP co-processor register space (cp10 and cp11) on the current CPU and must be called with preemption disabled. Unfortunately, the vfp_init late initcall does not disable preemption and can lead to an oops during boot if thread migration occurs at the wrong time and we end up attempting to access the FPSID on a CPU with VFP access disabled. This patch fixes the initcall to call vfp_enable from a non-preemptible context on each CPU and adds a BUG_ON(preemptible()) to ensure that any similar problems are easily spotted in the future.

Cc: stable@vger.kernel.org
Reported-by: Hyungwoo Yang <hwoo.yang@gmail.com>
Signed-off-by: Hyungwoo Yang <hyungwooy@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Nicolas Pitre

This is mainly to get rid of the "vfp_pm_suspend: saving vfp state" message flooding the kernel message ring by default.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 10 May 2012, 1 commit

Committed by Russell King

We set up identity MMU mappings across the entire 4GB of space, and they are permissionless because the domain is set to manager. This unfortunately allows ARMv6 and later CPUs to speculatively prefetch from the entire address space, which can cause undesirable side effects if those regions contain devices. As we set up the mappings with read/write permission, we can switch the domain to client mode and then use the XN bit for ARMv6 and above to control speculative prefetch to non-RAM areas.

Reported-by: R Sricharan <r.sricharan@ti.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 09 May 2012, 1 commit

Committed by Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 06 May 2012, 2 commits

Committed by Colin Cross

Commit 4e8ee7de ("ARM: SMP: use idmap_pgd for mapping MMU enable during secondary booting") switched secondary boot to use idmap_pgd, which is initialized during early_initcall, instead of a page table initialized during __cpu_up. This causes idmap_pgd to contain the static mappings but be missing all dynamic mappings. If a console is registered that creates a dynamic mapping, the printk in secondary_start_kernel will trigger a data abort on the missing mapping before the exception handlers have been initialized, leading to a hang. Initial boot is not affected because no consoles have been registered, and resume is usually not affected because the offending console is suspended. Onlining a CPU with hotplug triggers the problem. A workaround is to defer the printk in secondary_start_kernel until after the page tables have been switched back to init_mm.

Cc: <stable@vger.kernel.org>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Marc Zyngier

At the moment, read_persistent_clock is implemented at the platform level, which makes it impossible to compile these platforms into a single kernel. Implement these two functions at the architecture level, and provide a thin registration interface for both read_boot_clock and read_persistent_clock. The two affected platforms (OMAP and Tegra) are converted at the same time.

Reported-by: Jeff Ohlstein <johlstei@codeaurora.org>
Tested-by: Stephen Warren <swarren@wwwdotorg.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

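The shape of such a thin registration interface might look like the sketch below; the names, the single-hook simplification and the timespec-based signature are assumptions for illustration, not the kernel's exact API.

```c
#include <stdio.h>
#include <time.h>

/* Function-pointer hook for the persistent clock. */
typedef void (*clock_access_fn)(struct timespec *ts);

static void dummy_clock_access(struct timespec *ts)
{
        ts->tv_sec = 0;
        ts->tv_nsec = 0;
}

static clock_access_fn persistent_clock_fn = dummy_clock_access;

/* A platform (e.g. OMAP or Tegra) registers its implementation once... */
static void register_persistent_clock(clock_access_fn fn)
{
        persistent_clock_fn = fn;
}

/* ...and the architecture-level entry point simply dispatches through it. */
static void read_persistent_clock(struct timespec *ts)
{
        persistent_clock_fn(ts);
}

static void omap_read_persistent_clock(struct timespec *ts)
{
        ts->tv_sec = 1336176000;   /* made-up value standing in for a 32k counter */
        ts->tv_nsec = 0;
}

int main(void)
{
        struct timespec ts;

        register_persistent_clock(omap_read_persistent_clock);
        read_persistent_clock(&ts);
        printf("persistent clock: %lld s\n", (long long)ts.tv_sec);
        return 0;
}
```
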
- 05 May 2012, 4 commits

Committed by Will Deacon

The machine endianness has no direct correspondence to the syscall ABI, so use only AUDIT_ARCH_ARM when identifying the ABI to the audit tools in userspace.

Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Will Deacon

The ARM audit code incorrectly uses the saved application ip register value to infer syscall entry or exit. Additionally, the saved value will be clobbered if the current task is not being traced, which can lead to libc corruption if ip is live (apparently glibc uses it for the TLS pointer). This patch fixes the syscall tracing code so that the why parameter is used to infer the syscall direction, and the saved ip is only updated if we know that we will be signalling a ptrace trap.

Reported-and-Tested-by: Jon Masters <jcm@jonmasters.org>
Cc: stable@vger.kernel.org
Cc: Eric Paris <eparis@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Tim Bird

The inline assembly in kernel_execve() uses r8 and r9. Since this code sequence does not return, it usually doesn't matter if the register clobber list is accurate. However, I saw a case where a particular version of gcc used r8 as an intermediate for the value eventually passed to r9. Because r8 is used in the inline assembly and not mentioned in the clobber list, r9 was set to an incorrect value. This resulted in a kernel panic on execution of the first user-space program in the system. r9 is used in ret_to_user as the thread_info pointer, and if it's wrong, bad things happen.

Cc: <stable@vger.kernel.org>
Signed-off-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

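A toy ARM example of the underlying rule (not the kernel_execve code itself): any register the asm body modifies must be named in the clobber list, otherwise the compiler may keep a live value there and read garbage afterwards.

```c
#include <stdio.h>

static int double_it(int x)
{
        int out;

        asm volatile(
                "mov r8, %1\n\t"        /* use r8 as a scratch register */
                "add %0, r8, r8\n\t"
                : "=r" (out)
                : "r" (x)
                : "r8");                /* tell the compiler r8 is trashed */
        return out;
}

int main(void)
{
        printf("%d\n", double_it(21));
        return 0;
}
```
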
Committed by Russell King

This patch removes support for ARMv3 CPUs, which haven't worked properly for quite some time (see the FIXME comment in arch/arm/mm/fault.c). The only ARMv3 parts left are the cache model for ARMv3, which is needed for some odd reason by ARM740T CPUs, and the ability to build with -march=armv3, which is required for the RiscPC platform due to its bus structure.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 04 May 2012, 3 commits

Committed by Nicolas Pitre

There is just no point mapping up to 512MB for a serial port. A single 1MB entry is more than sufficient for all users. This also creates less interference for the following debugging patch.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Linus Walleij

The MMCI and PL022 SPI drivers need their platform data to work on the Versatile as well. (This does not fix the auxdata for MMCI instance mmc1 on the Versatile PB, though.)

Cc: Niklas Hernaeus <niklas.hernaeus@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Linus Walleij

This does two things to the FPGA IRQ controller in the Versatile family:

- Convert to MULTI_IRQ_HANDLER so we can drop the entry macro from the Integrator. The C IRQ handler was inspired by arch/arm/common/vic.c; a recent bug discovered in this handler was accounted for.

- Convert to using IRQ domains so we can get rid of the NO_IRQ mess and proceed with device tree support and the like.

As part of the exercise, bump all the low IRQ numbers on the Integrator PIC to start from 1 rather than 0, since IRQ 0 is now NO_IRQ. The Linux IRQ numbers are thus entirely decoupled from the hardware IRQ numbers in this controller. I was unable to split this patch; the main reason is the half-done conversion to device tree in Versatile. Tested on Integrator/AP and Integrator/CP.

Cc: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 02 May 2012, 2 commits

Committed by Will Deacon

The cacheflush syscall can fail for two reasons:

(1) The arguments are invalid (nonsensical address range or no VMA).

(2) The region generates a translation fault on a VIPT or PIPT cache.

This patch allows do_cache_op to return an error code to userspace in either of these cases. The various coherent_user_range implementations are modified to return 0 in the case of VIVT caches or -EFAULT in the case of an abort on v6/v7 cores.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

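From userspace (for example in a JIT), the practical effect is that the cacheflush call now has a return value worth checking. A minimal ARM-only sketch, assuming a toolchain whose <asm/unistd.h> provides __ARM_NR_cacheflush:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/unistd.h>

/* __ARM_NR_cacheflush is the ARM-private syscall; the third argument is a
 * flags word that is expected to be zero. */
static int sync_icache(void *start, void *end)
{
        return syscall(__ARM_NR_cacheflush, start, end, 0);
}

int main(void)
{
        static unsigned int code[16];   /* pretend we just patched this */

        if (sync_icache(code, code + 16) != 0)
                perror("cacheflush");   /* e.g. EFAULT for a bad range */
        return 0;
}
```
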
Committed by Dima Zavin

We can't be holding the mmap_sem while calling flush_cache_user_range, because the flush can fault. If we fault on a user address, the page fault handler will try to take mmap_sem again. Since both places acquire the read lock, most of the time it succeeds. However, if another thread tries to acquire the write lock on the mmap_sem (e.g. mmap) between the call to flush_cache_user_range and the fault, the down_read in do_page_fault will deadlock. [will: removed drop of vma parameter as already queued by rmk (7365/1)]

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Dima Zavin <dima@android.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 30 April 2012, 3 commits

Committed by Gavin Shan

Recently, Ryan Wang tried to compile the PPC pSeries platform without CONFIG_EEH and eventually ran into errors. Nishanth Aravamudan helped to narrow down the root cause: the pSeries platform depends heavily on CONFIG_EEH and won't work properly without EEH support. Following Ben's suggestion, the patch makes CONFIG_EEH invisible and keeps it always selected on the pSeries platform.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Grant Likely

The switch from using irq_map to irq_alloc_desc*() for managing irq number allocations introduced new bugs in some of the powerpc interrupt code. Several functions rely on the value of NR_IRQS to determine the maximum irq number that could get allocated. However, with sparse_irq and irq_alloc_desc*(), the maximum possible irq number is now specified with 'nr_irqs', which may be larger than NR_IRQS. This has caused breakage on powermac when CONFIG_NR_IRQS is set to 32. This patch removes most of the direct references to NR_IRQS in the powerpc code and replaces them with either a nr_irqs reference or the common for_each_irq_desc() macro. The powerpc-specific for_each_irq() macro is removed at the same time. Also, the Cell axon_msi driver is refactored to remove the global build assumption on the size of NR_IRQS and instead add a limit to the maximum irq number when calling irq_domain_add_nomap().

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Committed by Grant Likely

The mpc8xx driver uses a reference to NR_IRQS that is buggy: it uses NR_IRQS for the array size of the ppc_cached_irq_mask bitmap, but NR_IRQS could be smaller than the number of hardware irqs that ppc_cached_irq_mask tracks. Also, while fixing that problem, it became apparent that the interrupt controller only supports 32 interrupt numbers, but it is written as if it supports multiple register banks, which is more complicated. This patch pulls out the buggy reference to NR_IRQS and fixes the size of ppc_cached_irq_mask to match the number of HW irqs. It also drops the now-unnecessary code, since ppc_cached_irq_mask is no longer an array.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>

- 29 April 2012, 2 commits

Committed by Will Deacon

The cmpxchg64 routines for ARMv6+ CPUs replicate inline assembly that already exists for the atomic64 operations. Furthermore, the cmpxchg64 code uses the "memory" constraint in the clobber list rather than identifying the region of memory that is actually modified. This patch replaces the ARMv6+ cmpxchg64 code with macros that expand to the atomic64_ and local64_ variants, casting the pointer parameter to the appropriate container type.

Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

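A userspace analogue of the idea, with a local stand-in for atomic64_t; this is a sketch of the casting approach, not the kernel's macro.

```c
#include <stdio.h>

typedef struct { long long counter; } atomic64_t;   /* local stand-in */

static long long atomic64_cmpxchg(atomic64_t *v, long long old, long long newval)
{
        /* On failure, 'old' is updated with the value actually seen, so
         * returning it gives cmpxchg semantics either way. */
        __atomic_compare_exchange_n(&v->counter, &old, newval, 0,
                                    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
        return old;
}

/* Route cmpxchg64() through the 64-bit atomic helper by casting the pointer
 * to the container type. */
#define cmpxchg64(ptr, o, n) \
        atomic64_cmpxchg((atomic64_t *)(ptr), (long long)(o), (long long)(n))

int main(void)
{
        long long val = 5;
        long long prev = cmpxchg64(&val, 5, 9);

        printf("previous=%lld, now=%lld\n", prev, val);
        return 0;
}
```
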
Committed by Will Deacon

scu_power_mode changes the power mode for the current CPU, which it determines from smp_processor_id(). However, this assumes that the physical CPU number is equal to Linux's logical CPU number, and if that is not true we will power off the wrong CPU. This patch uses cpu_logical_map to translate the logical CPU number into a physical one in scu_power_mode.

Reported-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

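A minimal sketch of the logical-to-physical translation being described; the map contents are invented and the register write is replaced by a printf.

```c
#include <stdio.h>

/* Logical CPU numbers are Linux's own ordering; the physical IDs they map to
 * need not match, e.g. when the boot CPU is not physical CPU 0. */
static const unsigned int cpu_logical_map[] = { 2, 0, 1, 3 };

static void scu_power_off(unsigned int logical_cpu)
{
        unsigned int phys = cpu_logical_map[logical_cpu];

        /* A real implementation would poke the SCU register for 'phys';
         * using 'logical_cpu' directly could target the wrong core. */
        printf("powering off physical CPU %u (logical %u)\n", phys, logical_cpu);
}

int main(void)
{
        scu_power_off(0);
        return 0;
}
```
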
- 28 April 2012, 5 commits

Committed by Will Deacon

When a CPU is hotplugged off, we migrate any IRQs currently affine to it away and onto another online CPU by calling the irq_set_affinity function of the relevant interrupt controller chip. This function returns either IRQ_SET_MASK_OK or IRQ_SET_MASK_OK_NOCOPY, to indicate whether irq_data.affinity was updated. If we are forcefully migrating an interrupt (because the affinity mask no longer identifies any online CPUs) then we should update the IRQ affinity mask to reflect the new CPU set. Failure to do so can potentially leave /proc/irq/n/smp_affinity identifying only offline CPUs, which may confuse userspace IRQ balancing daemons. This patch updates migrate_one_irq to copy the affinity mask when the interrupt chip returns IRQ_SET_MASK_OK after forcefully changing the affinity of an interrupt.

Cc: stable@vger.kernel.org
Reported-by: Leif Lindholm <leif.lindholm@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

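A toy sketch of that decision; the return-code names mirror the kernel's, but the surrounding types are stand-ins for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

enum { IRQ_SET_MASK_OK, IRQ_SET_MASK_OK_NOCOPY };

struct irq_state { unsigned long affinity; };

static void migrate_one_irq_sketch(struct irq_state *s, unsigned long new_mask,
                                   bool forced, int chip_ret)
{
        /* IRQ_SET_MASK_OK_NOCOPY means the chip already wrote the mask back;
         * otherwise, on a forced migration, record the new mask so
         * /proc/irq/n/smp_affinity no longer points only at offline CPUs. */
        if (forced && chip_ret == IRQ_SET_MASK_OK)
                s->affinity = new_mask;
}

int main(void)
{
        struct irq_state s = { .affinity = 0x4 };   /* only CPU2, now offline */

        migrate_one_irq_sketch(&s, 0x3, true, IRQ_SET_MASK_OK);
        printf("affinity now %#lx\n", s.affinity);
        return 0;
}
```
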
Committed by Will Deacon

When performing a kexec on an SMP system, the secondary cores are stopped by calling machine_shutdown(), which in turn issues IPIs to offline the other CPUs. Unfortunately, this isn't enough to reboot the cores into a new kernel (since they are just executing a cpu_relax loop somewhere in memory), so we make use of platform_cpu_kill, part of the CPU hotplug implementation, to place the cores somewhere safe. This function expects to be called on the killing CPU for each core that it takes out. This patch moves the platform_cpu_kill callback out of the IPI handler and into smp_send_stop, therefore ensuring that it executes on the killing CPU rather than on the victim, matching what the hotplug code requires.

Cc: stable@vger.kernel.org
Reported-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by Will Deacon

TPIDRURW is a user read/write register forming part of the group of thread registers in more recent versions of the ARM architecture (~v6+). Currently, the kernel does not touch this register, which allows tasks to communicate covertly by reading and writing the register without context switching affecting its contents. This patch clears TPIDRURW when TPIDRURO is updated via the set_tls macro, which is called directly from __switch_to. Since the current behaviour makes the register useless to userspace as far as thread pointers are concerned, simply clearing the register (rather than saving and restoring it) will not cause any problems for userspace.

Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

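A userspace sketch of why the register is interesting: it can be read and written at user level with a CP15 access. The coprocessor encoding below (c13, c0, 2) is my reading of the ARM ARM and should be treated as illustrative.

```c
#include <stdio.h>

static unsigned long read_tpidrurw(void)
{
        unsigned long val;

        asm volatile("mrc p15, 0, %0, c13, c0, 2" : "=r" (val));
        return val;
}

static void write_tpidrurw(unsigned long val)
{
        asm volatile("mcr p15, 0, %0, c13, c0, 2" : : "r" (val));
}

int main(void)
{
        /* Any value stashed here used to survive across context switches,
         * which is what made the register usable as a covert channel. */
        write_tpidrurw(0xdeadbeef);
        printf("TPIDRURW = %#lx\n", read_tpidrurw());
        return 0;
}
```
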
Committed by Stephen Boyd

WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference from the function arm_memory_present() to the function .init.text:memory_present()
The function arm_memory_present() references the function __init memory_present(). This is often because arm_memory_present lacks a __init annotation or the annotation of memory_present is wrong.

WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch in reference from the function alloc_init_pud() to the function .init.text:alloc_init_section()
The function alloc_init_pud() references the function __init alloc_init_section(). This is often because alloc_init_pud lacks a __init annotation or the annotation of alloc_init_section is wrong.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Committed by David Vrabel

In xen_restore_fl_direct(), xen_force_evtchn_callback() was being called even if no events were pending. This resulted in (depending on workload) about 100 times as many xen_version hypercalls as necessary. Fix this by correcting the sense of the conditional jump. This seems to give a significant performance benefit for some workloads. There is some subtlety here: "...since the check here is trying to check both pending and masked in a single cmpw, but I think this is correct. It will call check_events now only when the combined mask+pending word is 0x0001 (aka unmasked, pending)." (Ian Campbell)

CC: stable@kernel.org
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

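A small sketch of the check quoted above, assuming the usual layout with the pending byte at the lower address and the mask byte next to it, so that "unmasked and pending" is the single 16-bit value 0x0001.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for the two adjacent per-vCPU bytes: pending first, mask second. */
struct vcpu_info_sketch {
        uint8_t evtchn_upcall_pending;
        uint8_t evtchn_upcall_mask;
};

static bool needs_event_check(const struct vcpu_info_sketch *v)
{
        /* Pack both bytes into one word, as the cmpw does on little-endian
         * x86: 0x0001 means pending == 1 and mask == 0. */
        uint16_t word = (uint16_t)v->evtchn_upcall_pending |
                        ((uint16_t)v->evtchn_upcall_mask << 8);

        return word == 0x0001;
}

int main(void)
{
        struct vcpu_info_sketch v = { .evtchn_upcall_pending = 1,
                                      .evtchn_upcall_mask = 0 };

        printf("check_events? %s\n", needs_event_check(&v) ? "yes" : "no");
        return 0;
}
```
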
- 27 April 2012, 5 commits

Committed by Marc Zyngier

All mainline platforms using the ARM architected timers are DT-only. As such, remove the ad-hoc support that is no longer needed.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Committed by Marc Zyngier

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Pawel Moll <pawel.moll@arm.com>

Committed by Marc Zyngier

If CONFIG_LOCAL_TIMERS is not defined, let the architected timer driver register a single clock_event_device that is used as a global timer.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Committed by Marc Zyngier

Add runtime DT support and documentation for the Cortex-A7/A15 architected timers.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

Committed by Marc Zyngier

Provide an A15 sched_clock implementation using the virtual counter, which is thought to be more useful than the physical one in a virtualised environment, as it can offset the time spent in another VM or the hypervisor.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>