- 04 Jul 2014, 1 commit

Submitted by Steve Capper

The define ARM64_64K_PAGES is tested for rather than CONFIG_ARM64_64K_PAGES. Correct that typo here.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

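A minimal illustration of the bug class, with hypothetical surrounding code (Kconfig symbols only ever reach C with a CONFIG_ prefix):

    #ifdef ARM64_64K_PAGES            /* before: never defined, so the block is silently dead */
    #endif

    #ifdef CONFIG_ARM64_64K_PAGES     /* after: the macro the build system actually generates */
    #endif
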
- 18 Jun 2014, 3 commits

Submitted by Will Deacon

This should be a plain old '&'; the current form could easily lead to undefined behaviour if the target of a pmd_mknotpresent() invocation is the same as the parameter.

Fixes: 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # v3.15
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

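A sketch of the problem being described, assuming the macro form in use at the time (a reconstruction, not the verbatim diff):

    /* before (reconstruction): '&=' writes back through the macro argument,
     * so pmd = pmd_mknotpresent(pmd) both modifies and reads pmd with no
     * intervening sequence point -- undefined behaviour:
     *
     *   #define pmd_mknotpresent(pmd) (__pmd(pmd_val(pmd) &= ~PMD_TYPE_MASK))
     */

    /* after: a plain '&' computes a new value with no side effects */
    #define pmd_mknotpresent(pmd)   (__pmd(pmd_val(pmd) & ~PMD_TYPE_MASK))
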
Submitted by Suravee Suthikulpanit

arm64 does not define a dma_get_required_mask() function, so it should not define ARCH_HAS_DMA_GET_REQUIRED_MASK. Doing so causes build errors in some device drivers (e.g. mpt2sas).

Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>

Submitted by Will Deacon

Whilst native arm64 applications don't have the 16-bit UID/GID syscalls wired up, compat tasks can still access them. The 16-bit wrappers for these syscalls use __kernel_old_uid_t and __kernel_old_gid_t, which must be 16-bit data types to maintain compatibility with the 16-bit UIDs used by compat applications. This patch defines 16-bit __kernel_old_{gid,uid}_t types for arm64 instead of using the 32-bit types provided by asm-generic.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

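A sketch of the kind of override described, assuming it lives in the arm64 uapi posix_types.h header (the self-referential #define is the asm-generic convention for "already defined, don't redefine"):

    typedef unsigned short __kernel_old_uid_t;
    typedef unsigned short __kernel_old_gid_t;
    #define __kernel_old_uid_t __kernel_old_uid_t   /* suppress the 32-bit asm-generic default */

    #include <asm-generic/posix_types.h>
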
- 29 May 2014, 6 commits

Submitted by Will Deacon

Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents") changed the pmd manipulator and accessor functions to convert the target pmd to a pte, process it with the pte functions, then convert it back. Along the way, we gained support for PTE_WRITE; however, this is completely ignored by set_pmd_at, and so we fail to set PMD_SECT_RDONLY for PMDs, resulting in all sorts of lovely failures (like CoW not working).

Partially reverting the offending commit (by making use of PMD_SECT_RDONLY explicitly for the pmd_{write,wrprotect,mkwrite} functions) leads to further issues because pmd_write can then return potentially incorrect values for page table entries marked as RDONLY, leading to BUG_ON(pmd_write(entry)) tripping under some THP workloads.

This patch fixes the issue by routing set_pmd_at through set_pte_at, which correctly takes the PTE_WRITE flag into account. Given that THP mappings are always anonymous, the additional cache-flushing code in __sync_icache_dcache won't impose any significant overhead as the flush will be skipped.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

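The routing described might look like this (a sketch, assuming the pmd_pte() conversion helper introduced by the earlier commit; not the verbatim diff):

    static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
                                  pmd_t *pmdp, pmd_t pmd)
    {
        /* reuse the pte path so PTE_WRITE is honoured for THP entries */
        set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd));
    }
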
Submitted by AKASHI Takahiro

This patch allows system call entry and exit to be traced as ftrace events, i.e. sys_enter_*/sys_exit_*, if CONFIG_FTRACE_SYSCALLS is enabled. Those events appear, and can be controlled, under ${sysfs}/tracing/events/syscalls/. Please note that we can't trace compat system calls here, because AArch32 mode does not share the same syscall table as AArch64. Just define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS in order to avoid unexpected results (bogus syscalls reported, or even a hang-up).

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by AKASHI Takahiro

CALLER_ADDRx returns the caller's address at the specified level in the call stack. These macros are used by several tracers, such as irqsoff and preemptoff. Strangely, however, they are referenced even without FTRACE.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by AKASHI Takahiro

This patch allows "dynamic ftrace" if CONFIG_DYNAMIC_FTRACE is enabled, so that tracing can be turned on and off dynamically on a per-function basis. On arm64, this is done by patching the single branch instruction to _mcount() inserted by gcc's -pg option. The branch is replaced with a NOP at kernel start-up; later on, the NOP is patched to a branch to ftrace_caller() when tracing is enabled, and back to a NOP when it is disabled. Please note that ftrace_caller() is the counterpart of _mcount() in the case of 'static' ftrace. More details on the architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

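A condensed sketch of the enable path just described. ftrace_modify_code() stands in for the arch's local validate-and-patch helper; treat the exact names and signatures as illustrative:

    /* turn the NOP at rec->ip back into a 'bl <ftrace_caller>' */
    int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
    {
        u32 old = aarch64_insn_gen_nop();
        u32 new = aarch64_insn_gen_branch_imm(rec->ip, addr,
                                              AARCH64_INSN_BRANCH_LINK);

        /* verify the old instruction is still in place, then patch safely */
        return ftrace_modify_code(rec->ip, old, new, true);
    }
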
Submitted by AKASHI Takahiro

This patch implements the arm64-specific part of support for function tracers, such as function (CONFIG_FUNCTION_TRACER), function_graph (CONFIG_FUNCTION_GRAPH_TRACER) and the function profiler (CONFIG_FUNCTION_PROFILER). With the 'function' tracer, all the functions in the kernel are traced with timestamps in ${sysfs}/tracing/trace. If the function_graph tracer is specified, a call graph is generated. The kernel must be compiled with the -pg option so that _mcount() is inserted at the beginning of functions; this function is called on every function entry as long as tracing is enabled. In addition, the function_graph tracer also needs to be able to probe a function's exit: ftrace_graph_caller() and return_to_handler do this by faking the link register's value to intercept the function's return path. More details on the architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.

Reviewed-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

Submitted by AKASHI Takahiro

Since insn.h is indirectly included in asm/entry-ftrace.S, we need to exclude some declarations with __ASSEMBLY__ guards.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

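The guard pattern in question, sketched with one representative prototype (which declarations are guarded is illustrative):

    #ifndef __ASSEMBLY__
    /* C-only declarations: invisible when the header is pulled into .S files */
    u32 aarch64_insn_gen_nop(void);
    #endif
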
- 26 May 2014, 1 commit

Submitted by Marc Zyngier

In order to allow KVM to run on Cortex-A53 implementations, wire up the minimal support required.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>

- 23 May 2014, 3 commits

Submitted by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strlen() and strnlen() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strcmp() and strncmp() functions.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by zhichang.yuan

This patch, based on Linaro's Cortex Strings library, adds an assembly-optimized memcmp() function.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 22 May 2014, 1 commit

Submitted by Peter Zijlstra

The only idle method for arm64 is WFI, and it therefore unconditionally requires the reschedule interrupt when idle.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: http://lkml.kernel.org/r/20140509170649.GG13658@twins.programming.kicks-ass.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>

- 17 May 2014, 2 commits

Submitted by Larry Bassel

Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1:

    commit bf18525f
    Author: Stephen Boyd <sboyd@codeaurora.org>
    Date:   Tue Oct 29 20:32:56 2013 +0100

    ARM: 7872/1: Support arch_irq_work_raise() via self IPIs

    By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway.

    Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
    Reviewed-by: Kevin Hilman <khilman@linaro.org>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

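A sketch of the self-IPI idea on arm64, assuming an IPI_IRQ_WORK IPI number mirroring the arm32 patch (the guard reflects that IPIs only exist on SMP builds):

    void arch_irq_work_raise(void)
    {
    #ifdef CONFIG_SMP
        /* kick ourselves so irq_work_run() executes from interrupt context */
        smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    #endif
    }
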
Submitted by Zi Shen Lim

Remove the unused and deprecated mc_capable() and smt_capable(). Both were added recently by f6e763b9 ("arm64: topology: Implement basic CPU topology support"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs").

Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 16 May 2014, 1 commit

Submitted by Catalin Marinas

This reverts commit bc07c2c6. While the aim is increased security for --x memory maps, it does not protect against kernel-level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 15 May 2014, 3 commits

Submitted by Ashwin Chaugule

PSCIv0.2 adds a new function called AFFINITY_INFO, which can be used to query whether a specified CPU has actually gone offline. Calling this function via cpu_kill ensures that a CPU has quiesced after a call to cpu_die. This helps prevent the CPU from doing arbitrary bad things when data or instructions are clobbered (as happens with kexec) in the window between a CPU announcing that it is dead and said CPU leaving the kernel.

Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Ashwin Chaugule

The PSCIv0.2 spec defines standard values for the function IDs and introduces a few new functions. Detect the PSCI version and select the appropriate PSCI functions accordingly.

Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Reviewed-by: Rob Herring <robh@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Ard Biesheuvel

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

- 12 May 2014, 5 commits

Submitted by AKASHI Takahiro

Some kernel files may include both linux/compat.h and asm/compat.h, directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mounts.c (do_mounts.c may read asm/compat.h via asm/ftrace.h once ftrace is implemented).

So this patch proactively:
1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h, and
2) replaces asm/compat.h with linux/compat.h in kernel/*.c; asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread().

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

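The resulting shape of asm/compat.h, sketched (TIF_32BIT is the arm64 flag marking a compat thread; the exact guard layout is illustrative):

    #ifdef CONFIG_COMPAT
    static inline int is_compat_task(void)
    {
        return test_thread_flag(TIF_32BIT);
    }
    #endif
    /* no !CONFIG_COMPAT stub here any more: that definition now comes
     * solely from linux/compat.h, so double inclusion can't clash */
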
Submitted by AKASHI Takahiro

This macro, regs_return_value, is used mainly by audit to record a system call's result, but it may also be used in test_kprobes.c.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

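On arm64 the syscall result is returned in x0, so the macro reduces to a one-liner (sketch):

    static inline unsigned long regs_return_value(struct pt_regs *regs)
    {
        return regs->regs[0];   /* x0 carries the syscall return value */
    }
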
Submitted by AKASHI Takahiro

Currently, syscall_trace() is called only for ptrace. With the additional TIF_xx flags defined, it is now called in all the cases of audit, ftrace and seccomp, in addition to ptrace.

Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Will Deacon

Since mdscr_el1 is part of the debug register group, it is highly likely to be trapped by a hypervisor to prevent virtual machines from debugging (buggering?) each other. Unfortunately, this absolutely destroys our performance, since we access the register on many of our low-level fault handling paths to keep track of the various debug state machines.

This patch removes our dependency on mdscr_el1 in the case that debugging is not being used. More specifically, we:

- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and avoid disabling step in the MDSCR when we don't need to. MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from userspace.

- Ensure debug exceptions are re-enabled on *all* exception entry paths, even the debug exception handling path (where we re-enable exceptions after invoking the handler). Since we can now rely on MDSCR_EL1.SS being cleared by the entry code, exception handlers can usually enable debug immediately before enabling interrupts.

- Remove all debug exception unmasking from ret_to_user and el1_preempt, since we will never get here with debug exceptions masked.

This results in a slight change to kernel debug behaviour, where we now step into interrupt handlers and data aborts from EL1 when debugging the kernel, which is actually a useful thing to do. A side-effect of this is that it *does* potentially prevent stepping off {break,watch}points when there is a high-frequency interrupt source (e.g. a timer), so a debugger would need to use either breakpoints or manually disable interrupts to get around this issue.

With this patch applied, guest performance is restored under KVM when debug register accesses are trapped (and we get a measurable performance increase on the host on Cortex-A57 too).

Cc: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Stefano Stabellini

virt_to_pfn has been defined in arch/arm/include/asm/memory.h by commit e26a9e00 ("ARM: Better virt_to_page() handling"), and Xen has come to rely on it. Introduce virt_to_pfn on arm64 too.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

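A one-line macro suffices; this sketch mirrors the arm32 definition cited above (arm64's actual spelling may route through its own phys/pfn helpers):

    #define virt_to_pfn(kaddr)  (__pa(kaddr) >> PAGE_SHIFT)
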
- 10 May 2014, 5 commits

Submitted by Will Deacon

The recently introduced acquire/release accessors refer to smp_mb() in the !CONFIG_SMP case. This is confusing when reading the code, so use barrier() directly when we know we're UP.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Will Deacon

Now that all callers of the barrier macros are updated to pass the mandatory options, update the macros so the option is actually used.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Will Deacon

When calling our low-level barrier macros directly, we can often suffice with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Steve Capper

The TLB maintenance functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range do not take the page granule into consideration when looping through the address range, and repeatedly flush TLB entries for the same page when operating with 64K pages. This patch reworks the logic such that we instead advance the loop by 1 << (PAGE_SHIFT - 12), so we avoid repeating ourselves.

Also, the routines have been converted from assembler to static inline functions, to aid legibility and potential compiler optimisations.

The isb() has been removed from flush_tlb_kernel_range() as it is only needed when changing the execute permission of a mapping. If one needs to set an area of the kernel as execute/non-execute, an isb() must be inserted after the call to flush_tlb_kernel_range.

Cc: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

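A simplified sketch of the reworked loop (ASID handling omitted; tlbi operands encode the VA shifted right by 12, hence the stride):

    static inline void __flush_tlb_range_sketch(unsigned long start,
                                                unsigned long end)
    {
        unsigned long addr;

        start >>= 12;   /* convert to the tlbi operand encoding */
        end >>= 12;

        dsb(ishst);
        /* one step per page, whatever the configured granule size */
        for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12))
            asm("tlbi vaae1is, %0" : : "r" (addr));
        dsb(ish);
    }
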
Submitted by Will Deacon

Some users of xchg() don't bother using the return value, which results in a compiler warning like the following (from kgdb):

    In file included from linux/arch/arm64/include/asm/atomic.h:27:0,
                     from include/linux/atomic.h:4,
                     from include/linux/spinlock.h:402,
                     from include/linux/seqlock.h:35,
                     from include/linux/time.h:5,
                     from include/uapi/linux/timex.h:56,
                     from include/linux/timex.h:56,
                     from include/linux/sched.h:19,
                     from include/linux/pid_namespace.h:4,
                     from kernel/debug/debug_core.c:30:
    kernel/debug/debug_core.c: In function 'kgdb_cpu_enter':
    linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value]
       ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
       ^
    linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro 'xchg'
     #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
    kernel/debug/debug_core.c:504:4: note: in expansion of macro 'atomic_xchg'
        atomic_xchg(&kgdb_active, cpu);
        ^

This patch makes use of the same trick as we do for cmpxchg, by assigning the return value to a dummy variable in the xchg() macro itself.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

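The dummy-variable trick, sketched (assigning the result inside the statement expression keeps -Wunused-value quiet when callers discard it):

    #define xchg(ptr, x)                                              \
    ({                                                                \
        __typeof__(*(ptr)) __ret;                                     \
        __ret = (__typeof__(*(ptr)))                                  \
            __xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));        \
        __ret;                                                        \
    })
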
- 09 May 2014, 7 commits

Submitted by Steve Capper

We have the capability to map 1GB level-1 blocks when using a 4K granule. This patch adjusts the create_mapping logic such that, when mapping physical memory on boot, we attempt to use a 1GB block if both the VA and PA start and end are 1GB-aligned. This both reduces the number of lookup levels required to resolve a kernel logical address and reduces TLB pressure on cores that support 1GB TLB entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Tested-by: Jungseok Lee <jays.lee@samsung.com>
[catalin.marinas@arm.com: s/prot_sect_kernel/PROT_SECT_NORMAL_EXEC/]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Bjorn Helgaas

arm64 sets CONFIG_64BIT=y and hence uses the "long counter" atomic64_t definition from include/linux/types.h. Make atomic64_read() return "long", not "long long".

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

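The resulting accessor is a one-line cast; a sketch:

    /* with CONFIG_64BIT, atomic64_t.counter is a long, so read it as one */
    #define atomic64_read(v)    (*(volatile long *)&(v)->counter)
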
Submitted by Catalin Marinas

The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, we are unlikely to see UP ARMv8 hardware, and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses.

A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it, since Device accesses are always Outer Shareable.

Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

Submitted by Catalin Marinas

The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP.

This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY, since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Catalin Marinas

This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Catalin Marinas

For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access, and applications like QEMU rely on such information being provided in sigcontext. This patch introduces ESR_EL1 tracking for arm64 kernel faults and sets bit 11 accordingly in the compat sigcontext.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Catalin Marinas

The hardware provides the maximum cache line size in the system via the CTR_EL0.CWG bits. This patch implements the cache_line_size() function to read this information, together with a sanity check for the case where the statically defined L1_CACHE_BYTES is smaller than the hardware value.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

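A sketch of the helper described (CWG is bits [27:24] of CTR_EL0; the granule in bytes is 4 << CWG, with 0 meaning "not reported"):

    static inline int cache_line_size_sketch(void)
    {
        u32 cwg = (read_cpuid_cachetype() >> 24) & 0xf;   /* CTR_EL0.CWG */

        /* fall back to the compile-time constant if CWG is unreported */
        return cwg ? 4 << cwg : L1_CACHE_BYTES;
    }
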
- 08 May 2014, 2 commits

Submitted by Stefano Stabellini

virt_to_pfn has been defined in arch/arm/include/asm/memory.h by commit e26a9e00 ("ARM: Better virt_to_page() handling"), and Xen has come to rely on it. Introduce virt_to_pfn on arm64 too.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Ard Biesheuvel

This patch modifies kernel_neon_begin() and kernel_neon_end() so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced, which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

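A usage sketch of the partial form described (the NEON body itself is elided):

    kernel_neon_begin_partial(4);   /* preserve only q0-q3 for this section */
    /* ... NEON code clobbering at most q0-q3 ... */
    kernel_neon_end();              /* the regular call also ends a partial section */
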