- 04 April 2014, 1 commit
-
-
By Russell King

virt_to_page() is incredibly inefficient when virt-to-phys patching is enabled. This is because we end up with this calculation:

    page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]

in assembly. The asm virt_to_phys() is equivalent to this operation:

    addr - PAGE_OFFSET + __pv_phys_offset

and we can see that, because this is assembly, the compiler has no chance to optimise some of that away. For the common cases, this should reduce down to:

    page = &mem_map[(addr - PAGE_OFFSET) >> 12]

Permit the compiler to make this optimisation by giving it more of the information it needs - do this by providing a virt_to_pfn() macro.

Another issue which makes this more complex is that __pv_phys_offset is a 64-bit type on all platforms. This is needlessly wasteful - if we store the physical offset as a PFN, we can save a lot of work having to deal with 64-bit values, which sometimes ends up producing incredibly horrid code:

    a4c:   e3009000   movw   r9, #0
           a4c: R_ARM_MOVW_ABS_NC   __pv_phys_offset
    a50:   e3409000   movt   r9, #0          ; r9 = &__pv_phys_offset
           a50: R_ARM_MOVT_ABS      __pv_phys_offset
    a54:   e3002000   movw   r2, #0
           a54: R_ARM_MOVW_ABS_NC   __pv_phys_offset
    a58:   e3402000   movt   r2, #0          ; r2 = &__pv_phys_offset
           a58: R_ARM_MOVT_ABS      __pv_phys_offset
    a5c:   e5999004   ldr    r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
    a60:   e3001000   movw   r1, #0
           a60: R_ARM_MOVW_ABS_NC   mem_map
    a64:   e592c000   ldr    ip, [r2]        ; ip = low word of __pv_phys_offset

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
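
A minimal sketch of what such a macro could look like, assuming the usual PAGE_OFFSET/PAGE_SHIFT/PHYS_PFN_OFFSET symbols; the exact definition in asm/memory.h may differ:

    /* Sketch: express the virt->pfn conversion in C so the compiler can fold
     * constants, instead of going through the patched asm virt_to_phys().
     * PHYS_PFN_OFFSET is assumed to be the PFN of the start of physical RAM. */
    #define virt_to_pfn(kaddr) \
            ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
             PHYS_PFN_OFFSET)

    #define virt_to_page(kaddr)     pfn_to_page(virt_to_pfn(kaddr))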
-
- 18 March 2014, 1 commit
-
-
By Zoltan Kiss

The grant mapping API does m2p_override unnecessarily: only gntdev needs it; for blkback and future netback patches it just causes lock contention, as those pages never go to userspace. Therefore this series does the following:
- the bulk of the original function (everything after the mapping hypercall) is moved to the arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller, and the m2p_override stubs could also be removed
- on x86 the set_phys_to_machine calls were moved up into this new function from the m2p_override functions
- the m2p_override functions are only called when there is a kmap_ops param
It also removes a stray space from arch/x86/include/asm/xen/page.h.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: Anthony Liguori <aliguori@amazon.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 12 March 2014, 1 commit
-
-
By Michael Opdenacker

This patch removes the use of the IRQF_DISABLED flag in arch/arm/include/asm/floppy.h. It has been a NOOP since 2.6.35 and will be removed one day.

Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 March 2014, 1 commit
-
-
By Bjorn Helgaas

Remove mc_capable() and smt_capable(). Neither is used. Both were added by 5c45bf27 ("sched: mc/smt power savings sched policy"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs").

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Link: http://lkml.kernel.org/r/20140304210737.16893.54289.stgit@bhelgaas-glaptop.roam.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 08 March 2014, 1 commit
-
-
By Russell King

With noMMU, CONFIG_PAGE_OFFSET was not being set correctly. As there's no MMU, PAGE_OFFSET should be equal to PHYS_OFFSET in all cases. This commit makes that explicit. Since we do this, we don't need to mess around in asm/memory.h with ifdefs to sort this out, so let's get rid of that, and there's no point offering the "Memory split" option for noMMU as it is meaningless there.

Fixes: b9b32bf7 ("ARM: use linker magic for vectors and vector stubs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 February 2014, 6 commits
-
-
By Ard Biesheuvel

This enables AT_HWCAP2 for ARM. The generic support for this new ELF auxv entry was added in commit 2171364d ("powerpc: Add HWCAP2 aux entry").

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

Looking at perf profiles of multi-threaded hackbench runs, a significant performance hit appears to come from the cmpxchg loop used to implement the 32-bit atomic_add_unless function. This can be mitigated by writing a direct implementation of __atomic_add_unless which doesn't require iteration outside of the atomic operation.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Victor Kamensky

Rename the logical shift macros 'push' and 'pull', defined in arch/arm/include/asm/assembler.h, to 'lspush' and 'lspull'. That eliminates the name conflict between the 'push' logical shift macro and the 'push' instruction mnemonic, which in turn allows assembler.h to be included in .S files that use the 'push' instruction.

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By David Howells

Delete ARM's asm/system.h. It's the last holdout and should be got rid of. This builds for defconfig, lpc32xx_defconfig, exynos_defconfig + XEN, the previous config changed to a Gemini system, and an omap3 config with TI_DAVINCI_EMAC.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

The pte_accessible macro can be used to identify page table entries capable of being cached by a TLB. In principle, this differs from pte_present, since PROT_NONE mappings are mapped using invalid entries identified as present, and ptes designated as `old' can use either invalid entries or those with the access flag cleared (guaranteed not to be in the TLB). However, there is a race to take care of, as described in 20841405 ("mm: fix TLB flush race between migration, and change_protection_range"), between a page being migrated and mprotected at the same time. In this case, we can check whether a TLB invalidation is pending for the mm and, if so, temporarily consider PROT_NONE mappings as valid.

This patch implements a quick pte_accessible macro for ARM by simply checking if the pte is valid/present depending on the mm. For classic MMU, these checks are identical and will generate some false positives for PROT_NONE mappings, but this is better than the current asm-generic definition of ((void)(pte),1). Finally, pte_present_user is moved to use pte_valid (and renamed appropriately) since we don't care about cache flushing for faulting mappings.

Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
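
As a rough sketch (assuming helper names such as pte_valid(), pte_present() and mm_tlb_flush_pending() match the surrounding code), the macro described above could look like:

    /* Sketch: while a TLB flush is pending for this mm, be conservative and
     * treat present-but-invalid (PROT_NONE) entries as potentially cached;
     * otherwise only entries with the valid bit set can be in the TLB. */
    #define pte_accessible(mm, pte) \
            (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))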
-
By Will Deacon

After a bunch of benchmarking on the interaction between dmb and pldw, it turns out that issuing the pldw *after* the dmb instruction can give modest performance gains (~3% atomic_add_return improvement on a dual A15). This patch adds prefetchw invocations to our barriered atomic operations, including cmpxchg, test_and_xxx and futexes.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 February 2014, 1 commit
-
-
By Vinayak Kale

Add a DSB after the icache flush to complete the cache maintenance operation.

Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 February 2014, 8 commits
-
-
By Will Deacon

CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user, which must trigger CoW appropriately when targeting user pages.

The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process).

This patch solves this problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are also mapped as kernel r/o. Patch co-developed with Russell King.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Nicolas Pitre

Now that we select HAVE_EFFICIENT_UNALIGNED_ACCESS for ARMv6+ CPUs, replace the __LINUX_ARM_ARCH__ check in uaccess.h with the new symbol.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Christopher Covington

Add the trivial support necessary to get hardware breakpoints working for GDB on ARMv8 simulators running in AArch32 mode.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Jonathan Austin

The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does. Note that as the ACTLR cannot (usually) be written from non-secure, it is the responsibility of the bootloader/firmware to set this bit per core - it is done here in Linux only as a last resort in case of bad firmware.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon

When unlocking a spinlock, we require the following, strictly ordered sequence of events:

    <barrier>   /* dmb */
    <unlock>
    <barrier>   /* dsb */
    <sev>

Whilst the code does indeed reflect this in terms of the architecture, the final <barrier> + <sev> have been contracted into a single inline asm without a "memory" clobber, therefore the compiler is at liberty to reorder the unlock to the end of the above sequence. In such a case, a waiting CPU may be woken up before the lock has been unlocked, leading to extremely poor performance.

This patch reworks the dsb_sev() function to make use of the dsb() macro and ensure ordering against the unlock.

Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
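
A sketch of the reworked helper, assuming the dsb() macro carries a "memory" clobber so the compiler cannot hoist the unlock store past it (SEV here stands for the existing sev-instruction asm string used by the spinlock code):

    /* Sketch: dsb() orders the store that released the lock before the SEV
     * that wakes up any CPUs waiting in WFE. */
    static inline void dsb_sev(void)
    {
            dsb(ishst);
            __asm__(SEV);
    }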
-
By Christoffer Dall

The stage-2 memory attributes are distinct from the Hyp memory attributes and the stage-1 memory attributes. We were using the stage-1 memory attributes for stage-2 mappings, causing device mappings to be mapped as normal memory. Add the S2 equivalent defines for memory attributes and fix the comments explaining the defines while at it. Also add a prot_pte_s2 field to the mem_type struct and fill out the field for device mappings accordingly.

Cc: <stable@vger.kernel.org> [3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Tim Chen

This patch allows each architecture to add its own assembly-optimized arch_mcs_spin_lock_contended and arch_mcs_spinlock_uncontended for the MCS lock and unlock functions.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: AswinChandramouleeswaran <aswin@hp.com>
Cc: George Spelvin <linux@horizon.com>
Cc: Rik vanRiel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: MichelLespinasse <walken@google.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1390347382.3138.67.camel@schen9-DESK
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Tim Chen

We perform a cleanup of the Kbuild files in each architecture. We order the files in each Kbuild in alphabetical order by running the below script:

    for i in arch/*/include/asm/Kbuild
    do
        cat $i | gawk '/^generic-y/ {
                i = 3;
                do {
                        for (; i <= NF; i++) {
                                if ($i == "\\") {
                                        getline;
                                        i = 1;
                                        continue;
                                }
                                if ($i != "")
                                        hdr[$i] = $i;
                        }
                        break;
                } while (1);
                next;
        }
        // {
                print $0;
        }
        END {
                n = asort(hdr);
                for (i = 1; i <= n; i++)
                        print "generic-y += " hdr[i];
        }' > ${i}.sorted;
        mv ${i}.sorted $i;
    done

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
Cc: AswinChandramouleeswaran <aswin@hp.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: "Paul E.McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Figo.zhang" <figo1802@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Alex Shi <alex.shi@linaro.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: George Spelvin <linux@horizon.com>
Cc: MichelLespinasse <walken@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ Fixed build bug. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 28 January 2014, 1 commit
-
-
By Ezequiel Garcia

Some SoCs have MMIO regions that are shared across orthogonal subsystems. This commit implements a possible solution for thread-safe access to such regions through a spinlock-protected API. Concurrent access is protected with a single spinlock for the entire MMIO address space. While this protects shared registers, it also serializes access to unrelated/unshared registers. We add relaxed and non-relaxed variants, using writel_relaxed and writel respectively. The rationale for this is that some users may not require register write completion but only thread-safe access to a register.

Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
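
The commit message doesn't spell out the API, but a minimal sketch of such a spinlock-protected read-modify-write helper (the name atomic_io_modify and its signature are assumptions here) could look like:

    static DEFINE_RAW_SPINLOCK(__io_lock);

    /* Sketch: serialise read-modify-write of a shared MMIO register behind a
     * single spinlock covering the whole MMIO address space. */
    void atomic_io_modify(void __iomem *reg, u32 mask, u32 set)
    {
            unsigned long flags;
            u32 value;

            raw_spin_lock_irqsave(&__io_lock, flags);
            value = readl_relaxed(reg) & ~mask;
            value |= (set & mask);
            writel(value, reg);     /* non-relaxed variant */
            raw_spin_unlock_irqrestore(&__io_lock, flags);
    }

The relaxed variant would be identical except for using writel_relaxed() for the final store.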
-
- 22 January 2014, 1 commit
-
-
By Santosh Shilimkar

Introduce memblock memory allocation APIs which make it possible to support PAE or LPAE extensions on 32-bit archs where the physical memory start address can be beyond 4GB. In such cases, existing bootmem APIs, which operate on 32-bit addresses, won't work, and we need the memblock layer, which operates on 64-bit addresses. So we add equivalent APIs so that we can replace usage of bootmem with memblock interfaces. Architectures already converted to NO_BOOTMEM use these new memblock interfaces. The architectures which are still not converted to NO_BOOTMEM continue to function as is, because we still maintain the fallback option of the bootmem back-end supporting these new interfaces. So there is no functional change as such.

In the long run, once all the architectures move to NO_BOOTMEM, we can get rid of the bootmem layer completely. This is one step to remove the core code dependency on bootmem, and it also gives a path for architectures to move away from bootmem.

The proposed interface becomes active if both CONFIG_HAVE_MEMBLOCK and CONFIG_NO_BOOTMEM are specified by the arch. In case of !CONFIG_NO_BOOTMEM, the memblock() wrappers fall back to the existing bootmem APIs so that arches not converted to NO_BOOTMEM continue to work as is. The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE is kept the same.

[akpm@linux-foundation.org: s/depricated/deprecated/]
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 January 2014, 1 commit
-
-
By Russell King

ARM's ffs/fls implementations are not type-compatible with x86, so when they're used in combination with min()/max(), they provoke warnings. Change these to be inline functions with the correct types, provide the clz-based implementation separately, and document their individual behaviours.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
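
A hedged sketch of the idea, using the compiler builtin for illustration (the kernel versions use the clz instruction directly and differ in detail):

    /* Sketch: typed inline functions rather than macros, so the result type is
     * well-defined when used with min()/max(). fls(0) == 0, fls(1) == 1. */
    static inline int fls(unsigned int x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    /* ffs() is 1-based: ffs(0) == 0, ffs(8) == 4. */
    static inline int ffs(unsigned int x)
    {
            return fls(x & -x);
    }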
-
- 13 January 2014, 1 commit
-
-
By Dario Faggioli

Add the syscalls needed for supporting scheduling algorithms with extended scheduling parameters (e.g., SCHED_DEADLINE). In general, this makes it possible to specify a periodic/sporadic task that executes for a given amount of runtime at each instance and is scheduled according to the urgency of its own timing constraints, i.e.:
- a (maximum/typical) instance execution time,
- a minimum interval between consecutive instances,
- a time constraint by which each instance must be completed.
Thus, both the data structure that holds the scheduling parameters of the tasks and the system calls dealing with it must be extended. Unfortunately, modifying the existing struct sched_param would break the ABI and result in potentially serious compatibility issues with legacy binaries. For these reasons, this patch:
- defines the new struct sched_attr, containing all the fields that are necessary for specifying a task in the computational model described above;
- defines and implements the new scheduling-related syscalls that manipulate it, i.e., sched_setattr() and sched_getattr().
Syscalls are introduced for x86 (32 and 64 bits) and ARM only, as a proof of concept and for developing and testing purposes. Making them available on other architectures is straightforward. Since no "user" for these new parameters is introduced in this patch, the implementation of the new system calls is identical to their already existing counterparts. Future patches that implement scheduling policies able to exploit the new data structure must also take care of modifying the sched_*attr() calls accordingly for their own purposes.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
[ Rewrote to use sched_attr. ]
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Removed sched_setscheduler2() for now. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-3-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
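
For orientation, a sketch of the extended parameter block described above; the field layout follows how sched_attr is commonly documented, so treat the details as illustrative rather than authoritative:

    struct sched_attr {
            __u32 size;             /* sizeof(struct sched_attr), for extensibility */
            __u32 sched_policy;
            __u64 sched_flags;

            /* SCHED_NORMAL, SCHED_BATCH */
            __s32 sched_nice;
            /* SCHED_FIFO, SCHED_RR */
            __u32 sched_priority;
            /* SCHED_DEADLINE (values in nanoseconds) */
            __u64 sched_runtime;
            __u64 sched_deadline;
            __u64 sched_period;
    };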
-
- 12 January 2014, 1 commit
-
-
By Peter Zijlstra

A number of situations currently require the heavyweight smp_mb(), even though there is no need to order prior stores against later loads. Many architectures have much cheaper ways to handle these situations, but the Linux kernel currently has no portable way to make use of them. This commit therefore supplies smp_load_acquire() and smp_store_release() to remedy this situation. The new smp_load_acquire() primitive orders the specified load against any subsequent reads or writes, while the new smp_store_release() primitive orders the specified store against any prior reads or writes. These primitives allow array-based circular FIFOs to be implemented without an smp_mb(), and also allow a theoretical hole in rcu_assign_pointer() to be closed at no additional expense on most architectures.

In addition, the RCU experience transitioning from explicit smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and rcu_assign_pointer(), respectively, resulted in substantial improvements in readability. It therefore seems likely that replacing other explicit barriers with smp_load_acquire() and smp_store_release() will provide similar benefits. It appears that roughly half of the explicit barriers in core kernel code might be so replaced.

[Changelog by PaulMck]
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
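
As an illustration of the circular-FIFO use case mentioned above, a hedged single-producer/single-consumer sketch (the ring structure and names are invented for this example):

    #define RING_SIZE 256   /* power of two */

    struct ring {
            unsigned int head;      /* written only by the producer */
            unsigned int tail;      /* written only by the consumer */
            int buf[RING_SIZE];
    };

    /* Producer: publish the data before publishing the new head index. */
    static int ring_put(struct ring *r, int v)
    {
            unsigned int head = r->head;
            unsigned int tail = smp_load_acquire(&r->tail);

            if (head - tail >= RING_SIZE)
                    return 0;       /* full */

            r->buf[head % RING_SIZE] = v;
            smp_store_release(&r->head, head + 1);
            return 1;
    }

The consumer side mirrors this: it reads the head with smp_load_acquire(), consumes the entry, and then advances the tail with smp_store_release(), so no explicit smp_mb() is needed on either side.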
-
- 06 January 2014, 2 commits
-
-
By Konrad Rzeszutek Wilk

The 'xen_hvm_resume_frames' used to be an 'unsigned long' and contain the virtual address of the grants. That was OK for most architectures (PVHVM, ARM) where the grants are contiguous in memory. That however is not the case for PVH, in which case we will have to do a lookup for each virtual address to get the PFN. Instead of doing that, let's make it a structure which contains the array of PFNs, the virtual address and the count of said PFNs. Also provide generic functions, gnttab_setup_auto_xlat_frames and gnttab_free_auto_xlat_frames, to populate said structure with appropriate values for PVHVM and ARM. To round it off, change the name from 'xen_hvm_resume_frames' to a more descriptive one - 'xen_auto_xlat_grant_frames'. For PVH, in the patch "xen/pvh: Piggyback on PVHVM for grant driver" we will populate the 'xen_auto_xlat_grant_frames' by ourselves.

v2 moves the xen_remap into gnttab_setup_auto_xlat_frames and also introduces xen_unmap for gnttab_free_auto_xlat_frames.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v3: Based on top of 'asm/xen/page.h: remove redundant semicolon']
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
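
A sketch of what such a structure could look like; the struct and field names here are assumptions based on the description (an array of PFNs, the mapped virtual address, and a count):

    /* Sketch: replaces the bare 'unsigned long' address with everything needed
     * to look up each grant frame individually on PVH. */
    struct grant_frames {
            xen_pfn_t *pfn;         /* array of grant-frame PFNs */
            unsigned int count;     /* number of entries in pfn[] */
            void *vaddr;            /* virtual address of the mapped frames */
    };

    extern struct grant_frames xen_auto_xlat_grant_frames;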
-
By Wei Liu

Signed-off-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 05 January 2014, 1 commit
-
-
By Rob Herring

ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64:

    drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 December 2013, 6 commits
-
-
By Will Deacon

With commit 11ec50ca ("word-at-a-time: provide generic big-endian zero_bytemask implementation"), the asm-generic word-at-a-time code now provides a zero_bytemask implementation, allowing us to make use of DCACHE_WORD_ACCESS on big-endian CPUs, providing our load_unaligned_zeropad function is endianness-clean. This patch reworks the load_unaligned_zeropad fixup code to work for both big- and little-endian CPUs, then removes the !CPU_BIG_ENDIAN check when selecting DCACHE_WORD_ACCESS.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Laura Abbott

The definition of virt_addr_valid is that virt_addr_valid should return true if and only if virt_to_page returns a valid pointer. The current definition of virt_addr_valid only checks against the virtual address range. There's no guarantee that just because a virtual address falls between PAGE_OFFSET and high_memory the associated physical memory has a valid backing struct page. Follow the example of other architectures and convert to pfn_valid to verify that the virtual address is actually valid. The check for an address between PAGE_OFFSET and high_memory is still necessary, as vmalloc/highmem addresses are not valid with virt_to_page.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
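
A hedged sketch of the resulting check (the exact macro in asm/memory.h may be spelled differently):

    /* Sketch: the address must lie in the lowmem virtual range *and* the
     * corresponding PFN must have a backing struct page. */
    #define virt_addr_valid(kaddr) \
            (((unsigned long)(kaddr) >= PAGE_OFFSET && \
              (unsigned long)(kaddr) < (unsigned long)high_memory) && \
             pfn_valid(__pa(kaddr) >> PAGE_SHIFT))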
-
By Russell King

The IDE code used to specify the IDE IRQs for chipsets operating in legacy mode. This appears to no longer work, and this information must be provided by the arch. Do so. This partially fixes the CY82C693 (and probably others) on Footbridge platforms.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Sebastian Hesselbarth

This adds support for the Marvell Tauros3 cache controller, which is compatible with the pl310 cache controller but broadcasts L1 cache operations to the L2 cache. While updating the binding documentation, clean up the list of possible compatibles. Also reorder the driver compatibles to allow non-ARM derivatives to be compatible with the ARM cache controller compatibles.

Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

There is a miscompilation of csum_tcpudp_magic() due to the way we pass the asm() operands in. Fortunately, this doesn't affect the IP code, but it can affect anyone who passes ntohs(udp->len) as the length argument, or protocols with more than 8 bits. The problem stems from passing 16-bit operands into an asm() - GCC makes no guarantees about what may be in the high 16 bits of such a register passed into assembly which is in the "HI" machine mode.

Address this by changing the way we handle the 16-bit arguments - since accumulating the protocol and length can never overflow, we can delegate this to the compiler to perform, and then accumulate it into the checksum inside the asm(), taking account of the endianness via an appropriate 32-bit rotation.

While we are here, also realise that there's a chance to optimise this a little: several callers from IP code pass a constant zero as the initial sum. This is wasteful - if we detect this condition, we can optimise away one instruction.

Tested-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Rob Herring

ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64 caused by commit c04e8e2f (arm64: allow ioremap_cache() to use existing RAM mappings):

    drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
    drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 December 2013, 1 commit
-
-
By Andre Przywara

For migration to work we need to save (and later restore) the state of each core's virtual generic timer. Since this is per-VCPU, we can use the [gs]et_one_reg ioctl and export the three needed registers (control, counter, compare value). Though they live in cp15 space, we don't use the existing list, since they need special accessor functions and the arch timer is optional.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 18 December 2013, 1 commit
-
-
By David S. Miller

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 December 2013, 2 commits
-
-
By Russell King

Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is not defined:

    In file included from arch/arm/include/asm/page.h:163:0,
                     from include/linux/mm_types.h:16,
                     from include/linux/sched.h:24,
                     from arch/arm/kernel/asm-offsets.c:13:
    arch/arm/include/asm/memory.h: In function '__virt_to_phys':
    arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
    arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
    arch/arm/include/asm/memory.h: In function '__phys_to_virt':
    arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c0 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Alexandre Courbot

Trusted Foundations is a TrustZone-based secure monitor for ARM that can be invoked using the same SMC-based API on supported platforms. This patch adds initial basic support for Trusted Foundations using the ARM firmware API. Current features are limited to the ability to boot secondary processors.

Note: the API followed by Trusted Foundations does *not* follow the SMC calling conventions. It has nothing to do with PSCI either, and is only relevant to devices that use Trusted Foundations (like most Tegra-based retail devices).

Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
-
- 12 December 2013, 1 commit
-
-
By Santosh Shilimkar

KVM initialisation fails on architectures implementing virt_to_idmap(), because virt_to_phys() on such architectures won't fetch the correct idmap page. So update the KVM ARM code to use virt_to_idmap() to fix the issue. Since the KVM code is shared between arm and arm64, we create kvm_virt_to_phys() and handle the redirection in the respective headers.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
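
A sketch of the per-architecture redirection described above, assuming it is done with simple macros in the respective kvm_mmu.h headers:

    /* arch/arm (sketch): go through the idmap conversion, since plain
     * virt_to_phys() does not return the idmap'd physical address here. */
    #define kvm_virt_to_phys(x)     virt_to_idmap((unsigned long)(x))

    /* arch/arm64 (sketch): the plain conversion is already correct. */
    #define kvm_virt_to_phys(x)     __virt_to_phys((unsigned long)(x))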
-
- 11 December 2013, 1 commit
-
-
By Laura Abbott

Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define these in an appropriate manner for ARM.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
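
For reference, the usual shape of these interfaces on other architectures; the ARM versions presumably mirror these prototypes:

    /* Change attributes of a kernel virtual address range covering 'numpages'
     * pages starting at 'addr'; returns 0 on success. */
    int set_memory_ro(unsigned long addr, int numpages);
    int set_memory_rw(unsigned long addr, int numpages);
    int set_memory_x(unsigned long addr, int numpages);
    int set_memory_nx(unsigned long addr, int numpages);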
-