- 10 July 2014, 5 commits
-
-
Committed by AKASHI Takahiro
On AArch64, audit is supported through the generic lib/audit.c and compat_audit.c, so this patch adds the arch-specific definitions required.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by AKASHI Takahiro
This patch adds auditing functions on entry to and exit from every system call invocation.
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
This patch adds __NR_* definitions to asm/unistd32.h, moves the __NR_compat_* definitions to asm/unistd.h and removes all the explicit unistd32.h includes apart from the one building the compat syscall table. The aim is to have the compat __NR_* definitions available without colliding with the native syscall definitions (required by lib/compat_audit.c to avoid duplicating the audit header files between native and compat).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Larry Bassel
Make calls to ct_user_enter when the kernel is exited and to ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths). These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series). The calls to ct_user_exit are made after hardware debugging has been enabled (enable_dbg_and_irq). The call to ct_user_enter is made at the beginning of the kernel_exit macro. This patch is based on earlier work by Kevin Hilman; save/restore optimizations were also done by Kevin.
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Larry Bassel
To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place. For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 July 2014, 2 commits
-
-
Committed by Laura Abbott
arm64 currently lacks support for -fstack-protector. Add functionality similar to arm's to detect stack corruption.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
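For illustration only, here is a userspace demonstration of what -fstack-protector provides; the kernel change wires up the same compiler feature in-kernel with its own canary (__stack_chk_guard), which this sketch does not attempt to reproduce. Exact abort behaviour may vary with distro hardening defaults such as _FORTIFY_SOURCE.

```c
/* Build: gcc -O0 -fstack-protector-all demo.c -o demo
 * Running it should abort with "stack smashing detected": the
 * compiler-inserted canary check catches the overflow below.
 */
#include <stdio.h>
#include <string.h>

static void overflow(const char *src)
{
    char buf[8];
    strcpy(buf, src);            /* deliberately writes past buf */
    printf("copied: %s\n", buf);
}

int main(void)
{
    overflow("this input is much longer than eight bytes");
    return 0;
}
```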
-
Committed by Zi Shen Lim
Create the cpu topology based on MPIDR. When hardware sets MPIDR to sane values, this method will always work, so it should also work well as the fallback method. [1] When we have multiple processing elements in the system, we create the cpu topology by mapping each affinity level (from lowest to highest) to threads (if they exist), cores, and clusters.
[1] http://www.spinics.net/lists/arm-kernel/msg317445.html
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
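As a rough sketch of the field extraction involved: the ARMv8 MPIDR_EL1 affinity fields sit at Aff0 [7:0], Aff1 [15:8], Aff2 [23:16] and Aff3 [39:32]. The helper name and sample value below are made up for illustration and are not the kernel's macros.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: pull one affinity byte out of an MPIDR value. */
static unsigned int mpidr_aff(uint64_t mpidr, int level)
{
    static const unsigned int shift[4] = { 0, 8, 16, 32 };
    return (unsigned int)((mpidr >> shift[level]) & 0xff);
}

int main(void)
{
    uint64_t mpidr = 0x010002ULL;   /* sample: Aff2=1, Aff1=0, Aff0=2 */

    printf("Aff0=%u Aff1=%u Aff2=%u\n",
           mpidr_aff(mpidr, 0), mpidr_aff(mpidr, 1), mpidr_aff(mpidr, 2));
    return 0;
}
```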
-
- 04 July 2014, 3 commits
-
-
Committed by Marc Zyngier
The CurrentEL system register reports the Current Exception Level of the CPU. It doesn't say anything about the stack handling, and yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h. It works by chance because PSR_MODE_EL2t happens to match the right bits, but it is otherwise a very bad idea. Just check the EL value instead.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Steve Capper
The __sync_icache_dcache routine will only flush the dcache for the first page of a compound page, potentially leaving stale icache data further on in a hugetlb page. This patch addresses the issue by taking the order of the page into consideration when flushing the dcache.
Reported-by: Mark Brown <broonie@linaro.org>
Tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
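A tiny illustration of the arithmetic at the heart of the fix: the flush must cover the whole compound page, i.e. PAGE_SIZE shifted by the page order, not just the head page. The numbers below assume 4 KiB base pages and a 2 MiB hugetlb page; they are examples, not taken from the patch.

```c
#include <stdio.h>

int main(void)
{
    unsigned long page_size = 4096;                 /* assumed 4 KiB pages */
    unsigned int order = 9;                         /* 2 MiB / 4 KiB = 2^9 pages */
    unsigned long flush_len = page_size << order;   /* 2 MiB; previously only 4 KiB was flushed */

    printf("flush length: %lu bytes\n", flush_len);
    return 0;
}
```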
-
Committed by Steve Capper
The define ARM64_64K_PAGES is tested for rather than CONFIG_ARM64_64K_PAGES. Correct that typo here.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 June 2014, 15 commits
-
-
Committed by Will Deacon
This should be a plain old '&' and could easily lead to undefined behaviour if the target of a pmd_mknotpresent invocation was the same as the parameter.
Fixes: 9c7e535f (arm64: mm: Route pmd thp functions through pte equivalents)
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # v3.15
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
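For illustration, a standalone sketch of the operator bug class being fixed: logical '&&' collapses the page-table value to 0 or 1 instead of clearing the intended bits. The mask and value below are made up; the real code manipulates pmd_val() with the arm64 type mask.

```c
#include <stdio.h>
#include <stdint.h>

#define TYPE_MASK 0x3ULL   /* hypothetical low "type/valid" bits */

int main(void)
{
    uint64_t pmd = 0x12345678000ULL | TYPE_MASK;

    uint64_t wrong = pmd && ~TYPE_MASK; /* logical AND: result is just 0 or 1 */
    uint64_t right = pmd & ~TYPE_MASK;  /* bitwise AND: clears only the type bits */

    printf("wrong = 0x%llx\n", (unsigned long long)wrong); /* prints 0x1 */
    printf("right = 0x%llx\n", (unsigned long long)right); /* prints 0x12345678000 */
    return 0;
}
```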
-
Committed by Mark Salter
I'm seeing this build failure for arm64:

  CC [M]  Documentation/filesystems/configfs/configfs_example_macros.o
  In file included from /usr/include/bits/sigcontext.h:27:0,
                   from /usr/include/signal.h:340,
                   from /usr/include/sys/wait.h:30,
                   from Documentation/accounting/getdelays.c:24:
  .../linux/usr/include/asm/sigcontext.h:61:2: error: unknown type name ‘u64’
    u64 esr;
    ^
  make[2]: *** [Documentation/accounting/getdelays] Error 1

This was introduced by commit 15af1942 (arm64: Expose ESR_EL1 information to user when SIGSEGV/SIGBUS). Using __u64 instead of u64 fixes the problem.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
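A minimal sketch of the rule at issue: exported UAPI headers must use the __u64/__u32 types from <linux/types.h>, because the kernel-internal u64 is not visible to userspace builds. The struct below is illustrative only, not the real sigcontext layout.

```c
#include <stdio.h>
#include <linux/types.h>        /* exported UAPI types: __u64, __u32, ... */

struct fault_info_example {
    __u64 esr;                  /* writing 'u64 esr;' here is what broke userspace builds */
};

int main(void)
{
    struct fault_info_example f = { .esr = 0x1234ULL };
    printf("esr=0x%llx size=%zu\n", (unsigned long long)f.esr, sizeof(f));
    return 0;
}
```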
-
Committed by Vinayak Kale
The APM X-Gene Storm SoC supports 4 serial ports. This patch adds device nodes for serial ports 1 to 3 (a device node for serial port 0 is already present in the dts file). It also sets the compatible property of the serial nodes to "ns16550a".
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suravee Suthikulpanit
arm64 does not define the dma_get_required_mask() function, so it should not define ARCH_HAS_DMA_GET_REQUIRED_MASK. Doing so causes build errors in some device drivers (e.g. mpt2sas).
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
-
Committed by Victor Kamensky
Currently, the prstatus note in a core file of an aarch32 process has an empty register set. As a result, aarch32 core files created by a v8 kernel are not very useful. This happens because the compat_gpr_get and compat_gpr_set functions can copy register values to/from either kbuf or ubuf. The ELF core file collection function fill_thread_core_info calls compat_gpr_get with kbuf set and ubuf set to 0, but the current compat_gpr_get and compat_gpr_set functions only handle the ubuf case. The fix is to handle kbuf and ubuf as two separate cases, in a similar way to other functions such as user_regset_copyout and user_regset_copyin.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Whilst native arm64 applications don't have the 16-bit UID/GID syscalls wired up, compat tasks can still access them. The 16-bit wrappers for these syscalls use __kernel_old_uid_t and __kernel_old_gid_t, which must be 16-bit data types to maintain compatibility with the 16-bit UIDs used by compat applications. This patch defines 16-bit __kernel_old_{gid,uid}_t types for arm64 instead of using the 32-bit types provided by asm-generic.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
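As a sketch of the point being made: the legacy 16-bit syscall wrappers need genuinely 16-bit ID types, whereas the asm-generic defaults are 32-bit. The typedef names below are hypothetical stand-ins; the real override lives in the arm64 uapi posix_types.h.

```c
#include <stdio.h>

typedef unsigned short my_kernel_old_uid_t;    /* 16-bit, as compat userspace expects */
typedef unsigned int   asm_generic_old_uid_t;  /* 32-bit asm-generic default */

int main(void)
{
    printf("16-bit old uid_t: %zu bytes, asm-generic default: %zu bytes\n",
           sizeof(my_kernel_old_uid_t), sizeof(asm_generic_old_uid_t));
    return 0;
}
```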
-
Committed by Will Deacon
Our compat PTRACE_POKEUSR implementation simply passes the user data to regset_copy_from_user after some simple range checking. Unfortunately, the data in question has already been copied to the kernel stack by this point, so the subsequent access_ok check fails and the ptrace request returns -EFAULT. This causes problems tracing fork() with older versions of strace. This patch briefly changes the fs to KERNEL_DS, so that the access_ok check passes even with a kernel address.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
When the CMA buffer is allocated, it is too early to know whether devices will require ZONE_DMA memory. This patch limits the CMA buffer to (DMA_BIT_MASK(32) + 1) if CONFIG_ZONE_DMA is enabled. In addition, it computes dma_to_phys(DMA_BIT_MASK(32)) before the increment (no current functional change).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
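The arithmetic behind the new limit, shown standalone: DMA_BIT_MASK(32) + 1 is 4 GiB. The macro below is written to match the kernel's linux/dma-mapping.h definition; treat it as a sketch rather than the patch itself.

```c
#include <stdio.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
    unsigned long long limit = DMA_BIT_MASK(32) + 1;  /* 0xFFFFFFFF + 1 */

    printf("CMA limit with ZONE_DMA: %llu bytes (%llu GiB)\n",
           limit, limit >> 30);
    return 0;
}
```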
-
Committed by Ard Biesheuvel
This patch modifies the GHASH secure hash implementation to switch to a faster, polynomial-multiplication-based reduction instead of one that uses shifts and rotates.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel
This fixes a bug in the GHASH algorithm that caused the calculated hash to be incorrect if the input was presented in chunks whose size is not a multiple of 16 bytes.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: fdd23894 ("arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas
This patch adds several defconfig options required primarily by the LTP test suite.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Paul Bolle
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Sudeep Holla
The Operating Performance Point (OPP) layer is a generic library used by CPUFREQ and DEVFREQ. It can be enabled only on platforms that select the ARCH_HAS_OPP option. This patch selects that option so that ARM64-based platforms can use the OPP library.
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Sudeep Holla
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by ChiaHao
The value of ESR has already been stored in x1 and should be passed directly to the do_sp_pc_abort function; "MOV x1, x25" is an extra operation that causes do_sp_pc_abort to receive the wrong ESR value.
Signed-off-by: ChiaHao <andy.jhshiu@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
-
- 07 June 2014, 1 commit
-
-
Committed by Loc Ho
This patch adds an APM X-Gene SoC RTC DTS entry.
Signed-off-by: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Loc Ho <lho@apm.com>
Cc: Jon Masters <jcm@redhat.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 June 2014, 1 commit
-
-
Committed by Naoya Horiguchi
Currently hugepage migration is available for all archs which support pmd-level hugepages, but testing has been done only for x86_64 and there are bugs on other archs. So, to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking whether hugepage migration is supported in vma_migratable().
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: <stable@vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 May 2014, 1 commit
-
-
Committed by Lorenzo Pieralisi
On platforms implementing CPU power management, the CPUidle subsystem can allow CPUs to enter idle states where the local timer logic is lost on power down. To keep the software timers functional, the kernel relies on an always-on broadcast timer being present in the platform to relay the interrupt signalling the timer expiries. For platforms implementing CPU core gating that do not implement an always-on HW timer, or implement it in a broken way, this patch adds code to initialize the kernel hrtimer-based clock event device upon boot (which can be chosen as the tick broadcast device by the kernel). It relies on a dynamically chosen CPU being always powered up. This CPU then relays the timer interrupt to CPUs in deep idle states through its HW local timer device. Having a CPU always on has implications on power management platform capabilities and makes CPUidle suboptimal, since at least one CPU is kept in a shallow idle state by the kernel to relay timer interrupts, but it at least leaves the kernel with a functional system with some working power management capabilities. The hrtimer-based clock event device is unconditionally registered, but has the lowest possible rating such that any broadcast-capable HW clock event device present will be chosen in preference as the tick broadcast device.
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 29 May 2014, 8 commits
-
-
Committed by Will Deacon
Commit 9c7e535f ("arm64: mm: Route pmd thp functions through pte equivalents") changed the pmd manipulator and accessor functions to convert the target pmd to a pte, process it with the pte functions, then convert it back. Along the way, we gained support for PTE_WRITE; however this is completely ignored by set_pmd_at, so we fail to set PMD_SECT_RDONLY for PMDs, resulting in all sorts of lovely failures (like CoW not working). Partially reverting the offending commit (by making use of PMD_SECT_RDONLY explicitly for the pmd_{write,wrprotect,mkwrite} functions) leads to further issues because pmd_write can then return potentially incorrect values for page table entries marked as RDONLY, leading to BUG_ON(pmd_write(entry)) tripping under some THP workloads. This patch fixes the issue by routing set_pmd_at through set_pte_at, which correctly takes the PTE_WRITE flag into account. Given that THP mappings are always anonymous, the additional cache-flushing code in __sync_icache_dcache won't impose any significant overhead as the flush will be skipped.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steve Capper <steve.capper@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
This patch allows system call entry and exit to be traced as ftrace events, i.e. sys_enter_*/sys_exit_*, if CONFIG_FTRACE_SYSCALLS is enabled. Those events appear under, and can be controlled from, ${sysfs}/tracing/events/syscalls/. Please note that we can't trace compat system calls here because AArch32 mode does not share the same syscall table with AArch64. Just define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS in order to avoid unexpected results (bogus syscalls reported or even a hang-up).
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
CALLER_ADDRx returns the caller's address at the specified level in the call stack. These macros are used by several tracers, such as irqsoff and preemptoff. Strange to say, however, they are referred to even without FTRACE.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
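To illustrate what CALLER_ADDRx resolves to, the userspace sketch below shows the level-0 case via the compiler builtin; deeper levels on arm64 need the frame-pointer walk this patch provides, which is not reproduced here. The macro name is reused only for illustration.

```c
#include <stdio.h>

/* Level 0: the immediate return address, as the compiler builtin provides it. */
#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))

__attribute__((noinline)) static void report_caller(void)
{
    printf("report_caller() was called from %p\n", (void *)CALLER_ADDR0);
}

int main(void)
{
    report_caller();
    return 0;
}
```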
-
Committed by AKASHI Takahiro
This patch allows "dynamic ftrace" if CONFIG_DYNAMIC_FTRACE is enabled. Here we can turn tracing on and off dynamically on a per-function basis. On arm64, this is done by patching the single branch instruction to _mcount() inserted by gcc's -pg option. The branch is replaced with a NOP initially at kernel start-up; later on, the NOP is patched to a branch to ftrace_caller() when tracing is enabled, or back to a NOP when disabled. Please note that ftrace_caller() is a counterpart of _mcount() in the case of 'static' ftrace. More details on architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
This patch implements the arm64-specific part needed to support function tracers, such as function (CONFIG_FUNCTION_TRACER), function_graph (CONFIG_FUNCTION_GRAPH_TRACER) and the function profiler (CONFIG_FUNCTION_PROFILER). With the 'function' tracer, all the functions in the kernel are traced with timestamps in ${sysfs}/tracing/trace. If the function_graph tracer is specified, a call graph is generated. The kernel must be compiled with the -pg option so that _mcount() is inserted at the beginning of functions. This function is called on every function's entry as long as tracing is enabled. In addition, the function_graph tracer also needs to be able to probe a function's exit. ftrace_graph_caller() and return_to_handler do this by faking the link register's value to intercept the function's return path. More details on architecture-specific requirements are described in Documentation/trace/ftrace-design.txt.
Reviewed-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
The recordmcount utility under scripts is run after compiling each object to find all the locations that call _mcount() and put them into a specific section named __mcount_loc. The linker then collects all such information into a table in the kernel image (between __start_mcount_loc and __stop_mcount_loc) for later use by ftrace. This patch adds the arm64-specific definitions needed to identify such locations. There are two implementations, in C and in Perl. On arm64, only the C version is used to build the kernel now that CONFIG_HAVE_C_RECORDMCOUNT is on, but the Perl version is also maintained. This patch also contains a workaround for the case where the host machine's elf.h lacks definitions of EM_AARCH64 and R_AARCH64_ABS64; without them, compiling the C version of recordmcount would fail.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
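A sketch of the flavour of host-side workaround described: if the host's <elf.h> predates AArch64 support, provide the missing ELF constants. The numeric values are the standard ELF ones for AArch64; the demo program around them is only there to make the snippet self-contained.

```c
#include <stdio.h>
#include <elf.h>

#ifndef EM_AARCH64
#define EM_AARCH64      183
#endif
#ifndef R_AARCH64_ABS64
#define R_AARCH64_ABS64 257
#endif

int main(void)
{
    printf("EM_AARCH64=%d R_AARCH64_ABS64=%d\n", EM_AARCH64, R_AARCH64_ABS64);
    return 0;
}
```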
-
Committed by AKASHI Takahiro
walk_stackframe() calls unwind_frame(), and if walk_stackframe() is "notrace", unwind_frame() should also be "notrace".
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by AKASHI Takahiro
Since insn.h is indirectly included in asm/entry-ftrace.S, we need to exclude some declarations with __ASSEMBLY__ guards.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 May 2014, 1 commit
-
-
Committed by Marc Zyngier
In order to allow KVM to run on Cortex-A53 implementations, wire up the minimal support required.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 23 May 2014, 3 commits
-
-
Committed by Geoff Levand
Change the arm64 linker script ENTRY() command to define _text as the kernel entry point. The arm64 boot protocol specifies that the kernel must be entered at the beginning of the kernel image. The existing ENTRY() command defined the symbol stext as the entry point, which emitted an incorrect entry point but did not cause a runtime error, because the existing entry code immediately jumps to stext.
Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Leif Lindholm
Booting a kernel with CONFIG_EFI enabled on a non-EFI system caused an oops with the current UEFI support code. Add the required test to prevent this.
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
-
Committed by zhichang.yuan
This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strlen() and strnlen() functions.
Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-