- 24 March 2014, 1 commit
-
-
Committed by Catalin Marinas
Since this macro is identical to pgprot_writecombine() and is only used in a single place, remove it completely to avoid confusion. On ARMv7+ processors, the coherent DMA mapping must be Normal NonCacheable (a.k.a. writecombine) to avoid mismatched hardware attribute aliases (with the kernel linear mapping as Normal Cacheable).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 March 2014, 1 commit
-
-
Committed by Christopher Covington
Without this, the following scenario is incorrectly determined to be invalid:

    addr       0x7f_ffffe000
    size       8192
    addr_limit 0x80_00000000

This behavior was observed while trying to vmsplice the stack as part of a CRIU dump of a process on a system started with the norandmaps kernel parameter.
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 March 2014, 3 commits
-
-
Committed by Steve Capper
Rather than have separate hugetlb and transparent huge page pmd manipulation functions, re-wire our thp functions to simply call the pte equivalents. This allows THP to take advantage of the new PTE_WRITE logic introduced in c2c93e5b ("arm64: mm: Introduce PTE_WRITE"). To represent splitting THPs we use the PTE_SPECIAL bit, as this is not used for pmds.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
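A minimal sketch of the re-wiring pattern this commit describes (illustrative only, not the exact diff): a pmd helper converts to a pte, reuses the existing pte helper, then converts back.

```c
/*
 * Illustration of the pattern: THP pmd helpers expressed in terms of the
 * pte equivalents via pmd_pte()/pte_pmd() conversions. Macro names are
 * chosen for this sketch, not copied from the patch.
 */
#define pmd_mkwrite_sketch(pmd)		pte_pmd(pte_mkwrite(pmd_pte(pmd)))
#define pmd_mkdirty_sketch(pmd)		pte_pmd(pte_mkdirty(pmd_pte(pmd)))
```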
-
Committed by Will Deacon
asm-generic offers an atomic-add based rwsem implementation, which can avoid the need for heavier, spinlock-based synchronisation on the fast path. This patch makes use of the optimised implementation for arm64 CPUs.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This enables support for the generic CPU feature modalias implementation that wires up optional CPU features to udev-based module autoprobing. A file <asm/cpufeature.h> is provided that maps CPU feature numbers to elf_hwcap bits, which is the standard way on arm64 to advertise optional CPU features both internally and to user space.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed unnecessary "!!"]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 March 2014, 5 commits
-
-
Committed by Jean Pihet
Add support for unwinding using the dwarf information in compat mode. Using the correct user stack pointer allows perf to record the frames correctly in the native and compat modes. Note that although the dwarf frame unwinding works ok using libunwind in native mode (on ARMv7 and ARMv8), some changes are required to the libunwind code for the compat mode. Those changes are posted separately on the libunwind mailing list. Tested on an ARMv8 platform with v8 and compat v7 binaries; the latter are statically built.
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jean Pihet
This patch implements the functions required for the perf registers API, allowing the perf tool to interface kernel register dumps with libunwind in order to provide userspace backtracing. Compat mode is also supported. Only the general-purpose user space registers are exported, i.e. PERF_REG_ARM_X0 ... PERF_REG_ARM_X28, PERF_REG_ARM_FP, PERF_REG_ARM_LR, PERF_REG_ARM_SP and PERF_REG_ARM_PC, and not the PERF_REG_ARM_V* registers.
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Radha Mohan Chintakuntla
ARMv8 supports a range of physical address bit sizes. The PARange bits from the ID_AA64MMFR0_EL1 register are read at boot time and the intermediate physical address size bits are written into the translation control registers (TCR_EL1 and VTCR_EL2). There is no change in the VA bits and levels of translation.
Signed-off-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Reviewed-by: Will Deacon <Will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
Special pte mappings are not intended to be executable and do not even have an associated struct page. This patch ensures that we do not call __sync_icache_dcache() on such ptes.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Steve Capper <Steve.Capper@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Cc: <stable@vger.kernel.org>
-
Committed by Catalin Marinas
pgprot_{dmacoherent,writecombine,noncached} don't need to generate executable mappings with side effects like __sync_icache_dcache() being called when the mapping is in user space.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
Cc: <stable@vger.kernel.org>
-
- 10 March 2014, 1 commit
-
-
Committed by Will Deacon
Commit 8adbf57f ("irqchip: gic: use dmb ishst instead of dsb when raising a softirq") added an explicit dmb(...) call to the GIC driver. This patch adds a simple dmb() macro to arm64, which expands to a DMB SY instruction.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
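As a rough sketch (assuming the same style as the existing dsb()/isb() helpers), such a macro is just a compiler barrier wrapping a full-system DMB; the exact kernel definition may differ in formatting:

```c
/* Sketch of a dmb() helper expanding to DMB SY. */
#define dmb()	asm volatile("dmb sy" : : : "memory")
```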
-
- 04 March 2014, 3 commits
-
-
Committed by Mark Brown
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support, since that should be based on information from firmware; it merely implements the scaffolding for integration of topology support in the architecture. No locking of the topology data is done, since it is only modified during CPU bringup with external serialisation from the SMP code. The goal is to separate the architecture hookup for providing topology information from the DT parsing, in order to ease review and avoid blocking the architecture code (which will be built on by other work) on the DT code review, by providing something simple and basic. Following patches will implement support for interpreting topology information from MPIDR and for parsing the DT topology bindings for ARM; similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: removed CONFIG_CPU_TOPOLOGY, always on if SMP]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This adds support for advertising the presence of ARMv8 Crypto Extensions in the AArch32 execution state to 32-bit ELF binaries running in 32-bit compat mode under the arm64 kernel.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
Add support for the ELF auxv entry AT_HWCAP2 when running 32-bit ELF binaries in compat mode.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 February 2014, 2 commits
-
-
Committed by Vladimir Murzin
psci_init() is written to return an error code if something goes wrong. However, the single user, setup_arch(), doesn't care about it. Moreover, every error path is supplied with a clear message, which is enough for pleasant debugging.
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
This patch adds support for DMA API cache maintenance on SoCs without hardware device cache coherency.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 February 2014, 5 commits
-
-
Committed by Catalin Marinas
Over the past couple of years, the generic mmu_gather gained range tracking - 597e1c35 (mm/mmu_gather: enable tlb flush range in generic mmu_gather), 2b047252 (Fix TLB gather virtual address range invalidation corner cases) - and tlb_fast_mode() has been removed - 29eb7782 (arch, mm: Remove tlb_fast_mode()). The new mmu_gather structure is now suitable for arm64 and this patch converts the arch asm/tlb.h to the generic code. One functional difference is the shift_arg_pages() case, where previously the code was flushing the full mm (no tlb_start_vma call) but now it flushes the range given to tlb_gather_mmu() (possibly slightly more efficient previously).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
-
Committed by Catalin Marinas
The patch moves the PCI I/O space (currently at 64K) before the earlyprintk mapping and extends it to 16MB.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Vijaya Kumar K
Typecast the instruction_pointer macro to unsigned long to resolve compiler warnings like: warning: format '%lx' expects argument of type 'long unsigned int', but argument 2 has type 'u64' [-Wformat]
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
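A hedged sketch of the kind of change involved, assuming the macro simply reads the saved pc from pt_regs (the real definition may differ):

```c
/* Illustration: cast the u64 pc field so callers printing it with %lx
 * pass a matching 'unsigned long' argument. */
#define instruction_pointer_sketch(regs)	((unsigned long)((regs)->pc))
```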
-
Committed by Vijaya Kumar K
Add KGDB debug support for kernel debugging. With this patch, basic KGDB debugging is possible: the GDB register layout is updated, and the GDB tool can establish a connection with the target and set/clear breakpoints.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Vijaya Kumar K
Add macros to enable and disable PSTATE.D for debugging. The macros local_dbg_save and local_dbg_restore are moved to the irqflags.h file. KGDB boot tests fail because PSTATE.D is masked; unmask it for debugging support.
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 February 2014, 2 commits
-
-
Committed by Will Deacon
cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers from inline asm blocks that only use these instructions to implement conditional branches.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
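To illustrate with a made-up example (not code from the patch): tbnz/cbnz test a register directly rather than the NZCV flags, so an asm block whose only branch is one of these does not need "cc" in its clobber list.

```c
/* Illustrative inline asm using tbnz for the conditional branch;
 * no "cc" clobber is required because the flags are never written. */
static inline int bit0_set(unsigned long x)
{
	int ret = 1;

	asm("	tbnz	%1, #0, 1f\n"	/* bit 0 set: keep ret == 1 */
	    "	mov	%w0, #0\n"	/* bit 0 clear: ret = 0      */
	    "1:"
	    : "+r" (ret)
	    : "r" (x));

	return ret;
}
```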
-
Committed by Will Deacon
Linux requires a number of atomic operations to provide full barrier semantics, that is no memory accesses after the operation can be observed before any accesses up to and including the operation in program order.

On arm64, these operations have been incorrectly implemented as follows:

    // A, B, C are independent memory locations

    <Access [A]>

    // atomic_op (B)
    1:  ldaxr   x0, [B]        // Exclusive load with acquire
        <op(B)>
        stlxr   w1, x0, [B]    // Exclusive store with release
        cbnz    w1, 1b

    <Access [C]>

The assumption here being that two half barriers are equivalent to a full barrier, so the only permitted ordering would be A -> B -> C (where B is the atomic operation involving both a load and a store).

Unfortunately, this is not the case by the letter of the architecture and, in fact, the accesses to A and C are permitted to pass their nearest half barrier, resulting in orderings such as Bl -> A -> C -> Bs or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the store-release on B). This is a clear violation of the full barrier requirement.

The simple way to fix this is to implement the same algorithm as ARMv7 using explicit barriers:

    <Access [A]>

    // atomic_op (B)
        dmb     ish            // Full barrier
    1:  ldxr    x0, [B]        // Exclusive load
        <op(B)>
        stxr    w1, x0, [B]    // Exclusive store
        cbnz    w1, 1b

    <Access [C]>

but this has the undesirable effect of introducing *two* full barrier instructions. A better approach is actually the following, non-intuitive sequence:

    <Access [A]>

    // atomic_op (B)
    1:  ldxr    x0, [B]        // Exclusive load
        <op(B)>
        stlxr   w1, x0, [B]    // Exclusive store with release
        cbnz    w1, 1b
        dmb     ish            // Full barrier

    <Access [C]>

The simple observations here are:

  - The dmb ensures that no subsequent accesses (e.g. the access to C) can enter or pass the atomic sequence.
  - The dmb also ensures that no prior accesses (e.g. the access to A) can pass the atomic sequence.
  - Therefore, no prior access can pass a subsequent access, or vice-versa (i.e. A is strictly ordered before C).
  - The stlxr ensures that no prior access can pass the store component of the atomic operation.

The only tricky part remaining is the ordering between the ldxr and the access to A, since the absence of the first dmb means that we're now permitting re-ordering between the ldxr and any prior accesses. From an (arbitrary) observer's point of view, there are two scenarios:

  1. We have observed the ldxr. This means that if we perform a store to [B], the ldxr will still return older data. If we can observe the ldxr, then we can potentially observe the permitted re-ordering with the access to A, which is clearly an issue when compared to the dmb variant of the code. Thankfully, the exclusive monitor will save us here since it will be cleared as a result of the store and the ldxr will retry. Notice that any use of a later memory observation to imply observation of the ldxr will also imply observation of the access to A, since the stlxr/dmb ensure strict ordering.

  2. We have not observed the ldxr. This means we can perform a store and influence the later ldxr. However, that doesn't actually tell us anything about the access to [A], so we've not lost anything here either when compared to the dmb variant.

This patch implements this solution for our barriered atomic operations, ensuring that we satisfy the full barrier requirements where they are needed.

Cc: <stable@vger.kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
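As a concrete but hedged illustration of the final sequence, a fully ordered add-return under this scheme looks roughly like the following sketch; it follows the shape described above rather than reproducing the exact kernel source:

```c
/*
 * Sketch of a fully-ordered add-return: ldxr/stlxr retry loop followed by
 * a trailing "dmb ish", matching the non-intuitive sequence in the text.
 */
static inline int atomic_add_return_sketch(int i, int *counter)
{
	unsigned long tmp;
	int result;

	asm volatile(
	"1:	ldxr	%w0, %2\n"		/* exclusive load                */
	"	add	%w0, %w0, %w3\n"	/* <op(B)>                       */
	"	stlxr	%w1, %w0, %2\n"		/* exclusive store with release  */
	"	cbnz	%w1, 1b\n"		/* retry if the monitor was lost */
	"	dmb	ish"			/* full barrier                  */
	: "=&r" (result), "=&r" (tmp), "+Q" (*counter)
	: "Ir" (i)
	: "memory");

	return result;
}
```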
-
- 06 February 2014, 1 commit
-
-
Committed by Will Deacon
The dsb instruction takes an option specifying both the target access types and shareability domain. This patch allows such an option to be passed to the dsb macro, resulting in potentially more efficient code. Currently the option is ignored until all callers are updated (unlike ARM, the option is mandated by the assembler).
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
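A hedged sketch of the eventual macro shape (the commit itself still ignores the argument, as noted above): the option token is stringified straight into the instruction.

```c
/* Illustration only: dsb_sketch(ishst) emits "dsb ishst",
 * dsb_sketch(sy) emits "dsb sy". */
#define dsb_sketch(opt)	asm volatile("dsb " #opt : : : "memory")
```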
-
- 05 February 2014, 3 commits
-
-
Committed by Catalin Marinas
This patch enables sys_compat, sys_finit_module, sys_sched_setattr and sys_sched_getattr for compat (AArch32) applications.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
Somehow SERROR has acquired an additional 'R' in a couple of headers. This patch removes them before they spread further. As neither instance is in use yet, no other sites need to be fixed up.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Vinayak Kale
Add a DSB after the icache flush to complete the cache maintenance operation. The function __flush_icache_all() is used only for user space mappings and an ISB is not required because of an exception return before executing user instructions. An exception return would behave like an ISB.
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
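A rough sketch of the resulting maintenance sequence; the specific invalidate instruction and barrier option below are illustrative assumptions, not the verbatim patch:

```c
/* Invalidate all instruction caches to PoU, then a DSB so the maintenance
 * is complete before returning; no ISB, since the eventual exception
 * return to user space acts as one. */
static inline void flush_icache_all_sketch(void)
{
	asm volatile(
	"	ic	ialluis\n"
	"	dsb	ish\n"
	: : : "memory");
}
```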
-
- 31 January 2014, 2 commits
-
-
Committed by Steve Capper
We have the following means for encoding writable or dirty ptes:

                                PTE_DIRTY   PTE_RDONLY
    !pte_dirty && !pte_write        0           1
    !pte_dirty &&  pte_write        0           1
     pte_dirty && !pte_write        1           1
     pte_dirty &&  pte_write        1           0

So we can't distinguish between writable clean ptes and read-only ptes. This can cause problems with ptes being incorrectly flagged as read-only when they are writable but not dirty. This patch introduces a new software bit PTE_WRITE which allows us to correctly identify writable ptes. PTE_RDONLY is now only clear for valid ptes where a page is both writable and dirty.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
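To make the new scheme concrete, here is a hedged sketch of how the software bit might be consulted; the bit positions and helper names are illustrative, not lifted from the patch:

```c
/* Illustrative bit positions -- not the authoritative arm64 layout. */
#define SKETCH_PTE_RDONLY	(1UL << 7)	/* hardware read-only     */
#define SKETCH_PTE_DIRTY	(1UL << 55)	/* software dirty         */
#define SKETCH_PTE_WRITE	(1UL << 57)	/* new software write bit */

/* Writability now comes from the dedicated software bit... */
static inline int sketch_pte_write(unsigned long pte)
{
	return !!(pte & SKETCH_PTE_WRITE);
}

/* ...and PTE_RDONLY is only cleared once a pte is both writable and dirty. */
static inline unsigned long sketch_pte_fixup(unsigned long pte)
{
	if ((pte & SKETCH_PTE_WRITE) && (pte & SKETCH_PTE_DIRTY))
		pte &= ~SKETCH_PTE_RDONLY;
	else
		pte |= SKETCH_PTE_RDONLY;
	return pte;
}
```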
-
Committed by Steve Capper
Expand out the pte manipulation functions. This makes our life easier when using things like tags and cscope.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 January 2014, 1 commit
-
-
Committed by Pankaj Dubey
arm64/include/asm/dma-contiguous.h is trying to include <asm-generic/dma-contiguous.h>, which does not exist, and thus fails the build for arm64 if we enable CONFIG_DMA_CMA. This patch fixes the build error by removing the unwanted header inclusion from arm64's dma-contiguous.h.
Signed-off-by: Pankaj Dubey <pankaj.dubey@samsung.com>
Signed-off-by: Somraj Mani <somraj.mani@samsung.com>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 17 January 2014, 1 commit
-
-
Committed by Catalin Marinas
This reverts commit 2f7dc602. The above commit breaks the mapping type for Device memory because pgprot_default already contains a Normal memory type. pgprot_default is also not initialised early enough for earlyprintk, resulting in an inconsistent memory mapping with the 64K PAGE_SIZE configuration.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
-
- 12 January 2014, 1 commit
-
-
Committed by Peter Zijlstra
A number of situations currently require the heavyweight smp_mb(), even though there is no need to order prior stores against later loads. Many architectures have much cheaper ways to handle these situations, but the Linux kernel currently has no portable way to make use of them.

This commit therefore supplies smp_load_acquire() and smp_store_release() to remedy this situation. The new smp_load_acquire() primitive orders the specified load against any subsequent reads or writes, while the new smp_store_release() primitive orders the specified store against any prior reads or writes. These primitives allow array-based circular FIFOs to be implemented without an smp_mb(), and also allow a theoretical hole in rcu_assign_pointer() to be closed at no additional expense on most architectures.

In addition, the RCU experience transitioning from explicit smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and rcu_assign_pointer(), respectively, resulted in substantial improvements in readability. It therefore seems likely that replacing other explicit barriers with smp_load_acquire() and smp_store_release() will provide similar benefits. It appears that roughly half of the explicit barriers in core kernel code might be so replaced.

[Changelog by PaulMck]
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
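For flavour, a hedged sketch of the array-based FIFO pattern the changelog mentions; the names and sizes are invented, and it assumes the usual kernel headers for the barrier primitives and cpu_relax():

```c
/* Single-producer/single-consumer handoff: the release publishes the slot
 * write, the acquire guarantees the consumer sees it before reading data. */
#define RING_SLOTS	16

struct ring {
	int		buf[RING_SLOTS];
	unsigned long	head;			/* producer writes, consumer reads */
};

static void ring_produce(struct ring *r, int value)
{
	unsigned long h = r->head;

	r->buf[h % RING_SLOTS] = value;		/* plain store into the slot */
	smp_store_release(&r->head, h + 1);	/* orders the store above    */
}

static int ring_consume(struct ring *r, unsigned long idx)
{
	while (smp_load_acquire(&r->head) <= idx)	/* pairs with the release */
		cpu_relax();

	return r->buf[idx % RING_SLOTS];	/* slot contents are now visible */
}
```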
-
- 08 January 2014, 5 commits
-
-
Committed by Jiang Liu
Optimize the jump label implementation for ARM64 by dynamically patching kernel text.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jiang Liu
Introduce the aarch64_insn_gen_{nop|branch_imm}() helper functions, which will be used to implement jump label on ARM64.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jiang Liu
The function encode_insn_immediate() will be used by other instruction-manipulation functions, so move it into insn.c and rename it to aarch64_insn_encode_immediate().
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jiang Liu
Introduce three interfaces to patch kernel and module code:

    aarch64_insn_patch_text_nosync():
        patch code without synchronization; it is the caller's responsibility to synchronize all CPUs if needed.
    aarch64_insn_patch_text_sync():
        patch code and always synchronize with stop_machine().
    aarch64_insn_patch_text():
        patch code and synchronize with stop_machine() if needed.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
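A hedged usage sketch of these interfaces (the NOP-generation helper comes from the companion commits above; the wrapper function and its `code_is_live` flag are invented for illustration, and error handling is elided):

```c
/* Replace one instruction at 'addr' with a NOP. */
static int patch_to_nop(void *addr, bool code_is_live)
{
	u32 insn = aarch64_insn_gen_nop();

	if (!code_is_live)
		/* Not yet executable by other CPUs: no synchronisation needed. */
		return aarch64_insn_patch_text_nosync(addr, insn);

	/* Live text: let the helper decide whether stop_machine() is required. */
	return aarch64_insn_patch_text(&addr, &insn, 1);
}
```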
-
Committed by Jiang Liu
Introduce the basic aarch64 instruction decoding helpers aarch64_get_insn_class() and aarch64_insn_hotpatch_safe().
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 December 2013, 1 commit
-
-
Committed by Anup Patel
The current max VCPUs per guest is set to 4, which prevents us from creating a guest (or VM) with 8 VCPUs on a host (e.g. the X-Gene Storm SoC) with 8 host CPUs. The correct value of max VCPUs per guest should be the same as the maximum supported by GICv2, which is 8, but increasing the max VCPUs per guest can make things slower, hence we add a Kconfig option to let KVM users select an appropriate value.
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 20 December 2013, 2 commits
-
-
Committed by Laura Abbott
arm64 targets need the features CMA provides. Add the appropriate hooks, header files, and Kconfig to allow this to happen.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
asm/cputype.h contains a bunch of #defines for CPU id registers that essentially map to themselves. Remove the #defines and pass the tokens directly to the inline asm() that reads the registers.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
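A hedged sketch of the resulting pattern: the register name token is stringified straight into the mrs instruction, so no self-referential #define is needed. The macro name is chosen for this sketch.

```c
/* Illustration of passing the system-register token directly to the asm. */
#define read_cpuid_sketch(reg) ({				\
	u64 __val;						\
	asm("mrs	%0, " #reg : "=r" (__val));		\
	__val;							\
})

/* usage: u64 midr = read_cpuid_sketch(MIDR_EL1); */
```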
-