- 16 January 2019, 2 commits
-
-
By Andrey Konovalov
Defining ARCH_SLAB_MINALIGN in arch/arm64/include/asm/cache.h when KASAN is off is not needed, as include/linux/slab.h already provides a fallback definition under an #ifndef guard.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
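For reference, the fallback in include/linux/slab.h takes roughly this form:

    #ifndef ARCH_SLAB_MINALIGN
    #define ARCH_SLAB_MINALIGN  __alignof__(unsigned long long)
    #endif

so an architecture only needs its own definition when it wants something stricter.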
-
By James Morse
Since commit b89d82ef ("arm64: kpti: Avoid rewriting early page tables when KASLR is enabled"), a kernel built with CONFIG_RANDOMIZE_BASE can decide early whether to use non-global mappings by checking kaslr_offset(). A kernel built without CONFIG_RANDOMIZE_BASE instead checks the cpufeature static key.

This leaves a gap where CONFIG_RANDOMIZE_BASE was enabled, no KASLR seed was provided, but kpti was forced on using the cmdline option. When the decision is made late, kpti_install_ng_mappings() will re-write the page tables, but arm64_kernel_use_ng_mappings()'s value does not change, as it only tests the cpufeature static key if CONFIG_RANDOMIZE_BASE is disabled. This function influences PROT_DEFAULT via PTE_MAYBE_NG, and causes pgattr_change_is_safe() to catch nG->G transitions when the unchanged PROT_DEFAULT is used as part of PAGE_KERNEL_RO:

[ 1.942255] alternatives: patching kernel code
[ 1.998288] ------------[ cut here ]------------
[ 2.000693] kernel BUG at arch/arm64/mm/mmu.c:165!
[ 2.019215] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 2.020257] Modules linked in:
[ 2.020807] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc2 #51
[ 2.021917] Hardware name: linux,dummy-virt (DT)
[ 2.022790] pstate: 40000005 (nZcv daif -PAN -UAO)
[ 2.023742] pc : __create_pgd_mapping+0x508/0x6d0
[ 2.024671] lr : __create_pgd_mapping+0x500/0x6d0
[ 2.058059] Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
[ 2.059369] Call trace:
[ 2.059845]  __create_pgd_mapping+0x508/0x6d0
[ 2.060684]  update_mapping_prot+0x48/0xd0
[ 2.061477]  mark_linear_text_alias_ro+0xdc/0xe4
[ 2.070502]  smp_cpus_done+0x90/0x98
[ 2.071216]  smp_init+0x100/0x114
[ 2.071878]  kernel_init_freeable+0xd4/0x220
[ 2.072750]  kernel_init+0x10/0x100
[ 2.073455]  ret_from_fork+0x10/0x18
[ 2.075414] ---[ end trace 3572f3a7782292de ]---
[ 2.076389] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

If arm64_kernel_unmapped_at_el0() is true, arm64_kernel_use_ng_mappings() should also be true.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
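A minimal sketch of the invariant the fix enforces (the exact shape is an assumption; the helper names are the ones used above):

    static inline bool arm64_kernel_use_ng_mappings(void)
    {
        /* kpti already live implies nG mappings, however kpti was enabled */
        if (arm64_kernel_unmapped_at_el0())
            return true;

        /* otherwise fall back to the early KASLR-based decision */
        if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
            return false;

        return kaslr_offset() > 0;
    }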
-
- 11 January 2019, 1 commit
-
-
By Will Deacon
A side effect of commit c55191e9 ("arm64: mm: apply r/o permissions of VM areas to its linear alias as well") is that the linear map is created with page granularity, which means that transitioning the early page table from global to non-global mappings when enabling kpti can take a significant amount of time during boot. Given that most CPU implementations do not require kpti, this mainly impacts KASLR builds where kpti is forcefully enabled. However, in these situations we know early on that non-global mappings are required and can avoid the use of global mappings from the beginning. The only gotcha is Cavium erratum #27456, which we must detect based on the MIDR value of the boot CPU.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: John Garry <john.garry@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
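A hedged sketch of that boot-CPU MIDR check (the range-list helper exists in the arm64 tree; the table name here is an assumption):

    static bool __init boot_cpu_has_cavium_27456(void)
    {
        u32 midr = read_cpuid_id();  /* MIDR_EL1 of the boot CPU */

        return is_midr_in_range_list(midr, cavium_erratum_27456_cpus);
    }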
-
- 10 January 2019, 1 commit
-
-
By Will Deacon
Some of the right letters, not necessarily in the right order: CONFIG_MODEVERIONS -> CONFIG_MODVERSIONS

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 09 January 2019, 1 commit
-
-
By Andrey Konovalov
Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE in kasan_cache_create(), we can reuse the ARCH_SLAB_MINALIGN macro.

Link: http://lkml.kernel.org/r/52ddd881916bcc153a9924c154daacde78522227.1546540962.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Suggested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
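On arm64 that reuse amounts to roughly the following in arch/arm64/include/asm/cache.h (a sketch, not a quoted diff):

    #ifdef CONFIG_KASAN_SW_TAGS
    #define ARCH_SLAB_MINALIGN  (1ULL << KASAN_SHADOW_SCALE_SHIFT)
    #endif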
-
- 06 January 2019, 2 commits
-
-
By Masahiro Yamada
Now that Kbuild automatically creates asm-generic wrappers for missing mandatory headers, it is redundant to list the same headers in generic-y and mandatory-y.

Suggested-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
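An illustrative example of the duplication being removed (kvm_para.h is a plausible but unverified instance; the Kbuild syntax itself is real):

    # include/uapi/asm-generic/Kbuild: the common list
    mandatory-y += kvm_para.h

    # arch/$(ARCH)/include/uapi/asm/Kbuild, before this cleanup:
    generic-y += kvm_para.h    # redundant: Kbuild now generates the wrapper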
-
By Masahiro Yamada
These comments are leftovers of commit fcc8487d ("uapi: export all headers under uapi directories"). Prior to that commit, exported headers had to be explicitly added to header-y. Now, all headers under the uapi/ directories are exported.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 05 January 2019, 1 commit
-
-
By Joel Fernandes (Google)
Patch series "Add support for fast mremap". This series speeds up the mremap(2) syscall by copying page tables at the PMD level even for non-THP systems. There is concern that the extra 'address' argument that mremap passes to pte_alloc may do something subtle architecture related in the future that may make the scheme not work. Also we find that there is no point in passing the 'address' to pte_alloc since its unused. This patch therefore removes this argument tree-wide resulting in a nice negative diff as well. Also ensuring along the way that the enabled architectures do not do anything funky with the 'address' argument that goes unnoticed by the optimization. Build and boot tested on x86-64. Build tested on arm64. The config enablement patch for arm64 will be posted in the future after more testing. The changes were obtained by applying the following Coccinelle script. (thanks Julia for answering all Coccinelle questions!). Following fix ups were done manually: * Removal of address argument from pte_fragment_alloc * Removal of pte_alloc_one_fast definitions from m68k and microblaze. // Options: --include-headers --no-includes // Note: I split the 'identifier fn' line, so if you are manually // running it, please unsplit it so it runs for you. virtual patch @pte_alloc_func_def depends on patch exists@ identifier E2; identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$"; type T2; @@ fn(... - , T2 E2 ) { ... } @pte_alloc_func_proto_noarg depends on patch exists@ type T1, T2, T3, T4; identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$"; @@ ( - T3 fn(T1, T2); + T3 fn(T1); | - T3 fn(T1, T2, T4); + T3 fn(T1, T2); ) @pte_alloc_func_proto depends on patch exists@ identifier E1, E2, E4; type T1, T2, T3, T4; identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$"; @@ ( - T3 fn(T1 E1, T2 E2); + T3 fn(T1 E1); | - T3 fn(T1 E1, T2 E2, T4 E4); + T3 fn(T1 E1, T2 E2); ) @pte_alloc_func_call depends on patch exists@ expression E2; identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$"; @@ fn(... -, E2 ) @pte_alloc_macro depends on patch exists@ identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$"; identifier a, b, c; expression e; position p; @@ ( - #define fn(a, b, c) e + #define fn(a, b) e | - #define fn(a, b) e + #define fn(a) e ) Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.comSigned-off-by: NJoel Fernandes (Google) <joel@joelfernandes.org> Suggested-by: NKirill A. Shutemov <kirill@shutemov.name> Acked-by: NKirill A. Shutemov <kirill@shutemov.name> Cc: Michal Hocko <mhocko@kernel.org> Cc: Julia Lawall <Julia.Lawall@lip6.fr> Cc: Kirill A. Shutemov <kirill@shutemov.name> Cc: William Kucharski <william.kucharski@oracle.com> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 04 January 2019, 5 commits
-
-
By Will Deacon
Commit 73aeb2cb ("ARM: 8787/1: wire up io_pgetevents syscall") hooked up the io_pgetevents() system call for 32-bit ARM, so we can do the same for the compat wrapper on arm64.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
The ARM Linux kernel handles the EABI syscall numbers as follows:

  0 - NR_SYSCALLS-1     : Invoke syscall via syscall table
  NR_SYSCALLS - 0xeffff : -ENOSYS (to be allocated in future)
  0xf0000 - 0xf07ff     : Private syscall or -ENOSYS if not allocated
  > 0xf07ff             : SIGILL

Our compat code gets this wrong and ends up sending SIGILL in response to all syscalls greater than NR_SYSCALLS which have a value greater than 0x7ff in the bottom 16 bits. Fix this by defining the end of the ARM private syscall region and checking the syscall number against that directly. Update the comment while we're at it.

Cc: <stable@vger.kernel.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Reported-by: Pi-Hsun Shih <pihsun@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
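A sketch of the corrected dispatch; the __ARM_NR_COMPAT_END constant and both helpers are assumptions for illustration:

    #define __ARM_NR_COMPAT_END  (__ARM_NR_COMPAT_BASE + 0x800)

        /* scno is already known to be >= __NR_compat_syscalls here */
        if (scno >= __ARM_NR_COMPAT_BASE && scno < __ARM_NR_COMPAT_END)
            ret = do_compat_private_syscall(scno, regs);  /* hypothetical */
        else
            force_sigill(current);                        /* hypothetical */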
-
By Dave Martin
Currently, <uapi/asm/sigcontext.h> provides common definitions for describing SVE context structures that are also used by the ptrace definitions in <uapi/asm/ptrace.h>. For this reason, a #include of <asm/sigcontext.h> was added in ptrace.h, but it turns out that this can interact badly with userspace code that tries to include ptrace.h on top of the libc headers (which may provide their own shadow definitions for sigcontext.h).

To make the headers easier for userspace to consume, this patch bounces the common definitions into an __SVE_* namespace and moves them to a backend header <uapi/asm/sve_context.h> that can be included by the other headers as appropriate. This should allow ptrace.h to be used alongside libc's sigcontext.h (if any) without ill effects.

This should make the situation unambiguous: <asm/sigcontext.h> is the header to include for the sigframe-specific definitions, while <asm/ptrace.h> is the header to include for ptrace-specific definitions.

To avoid conflicting with existing usage, <asm/sigcontext.h> remains the canonical way to get the common definitions for SVE_VQ_MIN, sve_vq_from_vl() etc., both in userspace and in the kernel: relying on these being defined as a side effect of including just <asm/ptrace.h> was never intended to be safe.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
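An illustrative shape of the split (the __SVE_* spelling follows the description above; the specific contents are a sketch):

    /* uapi/asm/sve_context.h: backend definitions, safe to share */
    #define __SVE_VQ_BYTES  16  /* bytes per quadword */

    /* uapi/asm/sigcontext.h keeps the canonical names */
    #include <asm/sve_context.h>
    #define SVE_VQ_BYTES    __SVE_VQ_BYTES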
-
By Dave Martin
SVE_PT_REGS_OFFSET is supposed to indicate the offset for skipping over the ptrace NT_ARM_SVE header (struct user_sve_header) to the start of the SVE register data proper. However, currently SVE_PT_REGS_OFFSET is defined in terms of struct sve_context, which is wrong: that structure describes the SVE header in the signal frame, not in the ptrace regset.

This patch fixes the definition to use the ptrace header structure struct user_sve_header instead. By good fortune, the two structures are the same size anyway, so there is no functional or ABI change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
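The gist of the fix, with the alignment arithmetic elided (sve_round_up() is a stand-in here, not a real macro):

    -#define SVE_PT_REGS_OFFSET  sve_round_up(sizeof(struct sve_context))
    +#define SVE_PT_REGS_OFFSET  sve_round_up(sizeof(struct user_sve_header))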
-
By Linus Torvalds
Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand.

It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact.

A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all.

This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form. There were a couple of notable cases:

 - csky still had the old "verify_area()" name as an alias.

 - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)

 - microblaze used the type argument for a debug printout

but other than those oddities this should be a total no-op patch.

I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
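The conversion is of this trivial form (illustrative):

    -    if (!access_ok(VERIFY_READ, from, n))
    +    if (!access_ok(from, n))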
-
- 03 January 2019, 1 commit
-
-
By Shaokun Zhang
For the arm64 updates for 4.21, there is a compilation error:

arch/arm64/kernel/head.S: Assembler messages:
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: missing ')'
arch/arm64/kernel/head.S:824: Error: unexpected characters following instruction at operand 2 -- `mov x2,#(2)|(2U<<(8))'
scripts/Makefile.build:391: recipe for target 'arch/arm64/kernel/head.o' failed
make[1]: *** [arch/arm64/kernel/head.o] Error 1

The GCC version is gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609. Let's fix it using the UL() macro.

Fixes: 66f16a24 ("arm64: smp: Rework early feature mismatched detection")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
[will: consistent use of UL() for all shifts in asm constants]
Signed-off-by: Will Deacon <will.deacon@arm.com>
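The underlying problem is that GNU as does not understand C's 'U' integer suffix, so constants shared with assembly must use UL() instead (illustrative, not the exact hunk):

    -#define CPU_STUCK_REASON_NO_GRAN  (2U << CPU_STUCK_REASON_SHIFT)
    +#define CPU_STUCK_REASON_NO_GRAN  (UL(2) << CPU_STUCK_REASON_SHIFT)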
-
- 29 December 2018, 7 commits
-
-
By Andrey Konovalov
Tag-based KASAN doesn't check memory accesses through pointers tagged with 0xff. When page_address is used to get a pointer to memory that corresponds to some page, the tag of the resulting pointer gets set to 0xff, even though the allocated memory might have been tagged differently.

For slab pages it's impossible to recover the correct tag to return from page_address, since the page might contain multiple slab objects tagged with different values, and we can't know in advance which one of them is going to get accessed. For non-slab pages, however, we can recover the tag in page_address, since the whole page was marked with the same tag.

This patch adds tagging to non-slab memory allocated with pagealloc. To set the tag of the pointer returned from page_address, the tag gets stored to page->flags when the memory gets allocated.

Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrey Konovalov
Tag-based KASAN inline instrumentation mode (which embeds checks of shadow memory into the generated code, instead of inserting a callback) generates a brk instruction when a tag mismatch is detected.

This commit adds a tag-based KASAN specific brk handler that decodes the immediate value passed to the brk instruction (to extract information about the memory access that triggered the mismatch), reads the register values (x0 contains the guilty address) and reports the bug.

Link: http://lkml.kernel.org/r/c91fe7684070e34dc34b419e6b69498f4dcacc2d.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrey Konovalov
Tag-based KASAN uses the Top Byte Ignore feature of arm64 CPUs to store a pointer tag in the top byte of each pointer. This commit enables the TCR_TBI1 bit, which enables Top Byte Ignore for the kernel, when tag-based KASAN is used.

Link: http://lkml.kernel.org/r/f51eca084c8cdb2f3a55195fe342dc8953b7aead.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
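Roughly the change described, as it would appear in the TCR setup (a sketch; the aggregation macro name is an assumption):

    #ifdef CONFIG_KASAN_SW_TAGS
    #define TCR_KASAN_FLAGS  TCR_TBI1  /* ignore bits 63:56 of TTBR1 VAs */
    #else
    #define TCR_KASAN_FLAGS  0
    #endif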
-
By Andrey Konovalov
virt_addr_is_linear (which is used by virt_addr_valid) assumes that the top byte of the address is 0xff, which isn't always the case with tag-based KASAN. This patch resets the tag in this macro.

Link: http://lkml.kernel.org/r/df73a37dd5ed37f4deaf77bc718e9f2e590e69b1.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrey Konovalov
This commit adds a few helper functions that are meant to be used to work with tags embedded in the top byte of kernel pointers: to set, to get or to reset the top byte.

Link: http://lkml.kernel.org/r/f6c6437bb8e143bc44f42c3c259c62e734be7935.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
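A minimal sketch of such helpers, assuming the arm64 convention that the tag lives in bits 63:56 (the names here are illustrative, not the exact ones from the patch):

    #define KASAN_TAG_SHIFT   56
    #define KASAN_TAG_MASK    (0xffUL << KASAN_TAG_SHIFT)
    #define KASAN_TAG_KERNEL  0xff  /* the "match-all" tag */

    static inline void *set_tag(const void *addr, u8 tag)
    {
        return (void *)(((u64)addr & ~KASAN_TAG_MASK) |
                        ((u64)tag << KASAN_TAG_SHIFT));
    }

    static inline u8 get_tag(const void *addr)
    {
        return (u8)((u64)addr >> KASAN_TAG_SHIFT);
    }

    static inline void *reset_tag(const void *addr)
    {
        return set_tag(addr, KASAN_TAG_KERNEL);
    }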
-
By Andrey Konovalov
Move the untagged_addr() macro from arch/arm64/include/asm/uaccess.h to arch/arm64/include/asm/memory.h, to be later reused by KASAN.

Also make the untagged_addr() macro accept all kinds of address types (void *, unsigned long, etc.), which avoids having to specify type casts in each place where the macro is used. This is done by using __typeof__.

Link: http://lkml.kernel.org/r/2e9ef8d2ed594106eca514b268365b5419113f6a.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
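The resulting macro is roughly as follows (sign_extend64() restores a canonical kernel address from bit 55, and __typeof__ preserves the argument's type):

    #define untagged_addr(addr) \
        ((__typeof__(addr))sign_extend64((u64)(addr), 55))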
-
By Andrey Konovalov
Tag-based KASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires 1/16th of the kernel virtual address space for the shadow memory. This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when the tag-based KASAN mode is enabled.

Link: http://lkml.kernel.org/r/308b6bd49f756bb5e533be93c6f085ba99b30339.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
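The resulting scaling choice, written as C for clarity (the real definition may be split between the Makefile and headers; treat this as a sketch):

    #ifdef CONFIG_KASAN_SW_TAGS
    #define KASAN_SHADOW_SCALE_SHIFT  4  /* 1 shadow byte per 16 bytes */
    #else
    #define KASAN_SHADOW_SCALE_SHIFT  3  /* generic KASAN: 1 per 8 bytes */
    #endif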
-
- 21 December 2018, 1 commit
-
-
By Lan Tianyu
Make kvm_set_spte_hva() return int, so that the caller can check the return value and decide whether to flush the TLB.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Acked-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
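The prototype change this implies (a sketch):

    -void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
    +int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);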
-
- 20 December 2018, 5 commits
-
-
By Marc Zyngier
32-bit and 64-bit use different symbols to identify the traps: 32-bit has a fine-grained approach (prefetch abort, data abort and HVC), while 64-bit is pretty happy with just "trap". This has been fine so far, except that we now need to decode some of that in tracepoints that are common to both architectures.

Introduce ARM_EXCEPTION_IS_TRAP, which abstracts the trap symbols, and make the tracepoint use it.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
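A sketch of the abstraction (the 64-bit symbol follows the naming above; the 32-bit symbol names are assumptions):

    #ifdef CONFIG_ARM64
    #define ARM_EXCEPTION_IS_TRAP(x)  ((x) == ARM_EXCEPTION_TRAP)
    #else
    #define ARM_EXCEPTION_IS_TRAP(x)            \
        ((x) == ARM_EXCEPTION_PREF_ABORT ||     \
         (x) == ARM_EXCEPTION_DATA_ABORT ||     \
         (x) == ARM_EXCEPTION_HVC)
    #endif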
-
By Will Deacon
Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as signed, which is sign-extended when assigning to kvm->arch.vtcr. Lucky for us, the architecture currently treats these upper bits as RES0 so, whilst we've been naughty, we haven't set fire to anything yet.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
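The gist of the fix is a one-character suffix (illustrative):

    -#define VTCR_EL2_RES1  (1 << 31)
    +#define VTCR_EL2_RES1  (1U << 31)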
-
By Christoffer Dall
vcpu_read_sys_reg should not be modifying the VCPU structure. Eventually, to handle EL2 sysregs for nested virtualization, we will call vcpu_read_sys_reg from places that have a const vcpu pointer, which will complain about the lack of the const modifier on the read path.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
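In other words, constifying the read path (a sketch):

    -u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg);
    +u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);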
-
By Christoffer Dall
The kvm_exit tracepoint strangely always reported exits as being IRQs. This seems to be because either the __print_symbolic or the tracepoint macros use a variable named idx.

Take this chance to update the fields in the tracepoint to reflect the concepts in the arm64 architecture that we pass to the tracepoint, and move the exception type table to the same location and header files as the exits code. We also clear the exception code to 0 for IRQ exits (which translates to UNKNOWN in text) to make it slightly less confusing to parse the trace output.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Christoph Hellwig
Otherwise the direct mapping won't work at all, given that a NULL dev->dma_ops causes a fallback. Note that we already explicitly set dev->dma_ops to dma_dummy_ops for dma-incapable devices, so this fallback should not be needed anyway.

Fixes: 356da6d0 ("dma-mapping: bypass indirect calls for dma-direct")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 18 December 2018, 8 commits
-
-
By Christoffer Dall
In attempting to re-construct the logic for our stage 2 page table layout, I found the reasoning in the comment explaining how we calculate the number of levels used for stage 2 page tables a bit backwards. This commit attempts to clarify the comment, to make it slightly easier to read without having the Arm ARM open on the right page. While we're at it, fixup a typo in a comment that was recently changed.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Punit Agrawal
KVM only supports PMD hugepages at stage 2. Now that the various page handling routines are updated, extend the stage 2 fault handling to map in PUD hugepages.

Addition of PUD hugepage support enables additional page sizes (e.g., 1G with 4K granule) which can be useful on cores that support mapping larger block sizes in the TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replace BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Punit Agrawal
In preparation for creating larger hugepages at Stage 2, add support to the age handling notifiers for PUD hugepages when encountered. Provide trivial helpers for arm32 to allow sharing code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Punit Agrawal
In preparation for creating larger hugepages at Stage 2, extend the access fault handling at Stage 2 to support PUD hugepages when encountered. Provide trivial helpers for arm32 to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Punit Agrawal
In preparation for creating PUD hugepages at stage 2, add support for detecting execute permissions on PUD page table entries. Faults due to lack of execute permissions on page table entries are used to perform i-cache invalidation on first execute. Provide trivial implementations of arm32 helpers to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Punit Agrawal
In preparation for creating PUD hugepages at stage 2, add support for write protecting PUD hugepages when they are encountered. Write protecting guest tables is used to track dirty pages when migrating VMs.

Also, provide trivial implementations of the required kvm_s2pud_* helpers to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON() in arm32 pud helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
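The arm32 stubs referred to in the bracketed note are of this warn-only shape (a sketch; the helper names are assumed from the kvm_s2pud_* naming above):

    /* arm32 has no stage 2 PUD hugepages, so these should never run */
    static inline void kvm_set_s2pud_readonly(pud_t *pudp)
    {
        WARN_ON(1);
    }

    static inline bool kvm_s2pud_readonly(pud_t *pudp)
    {
        WARN_ON(1);
        return false;
    }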
-
By Punit Agrawal
Introduce helpers to abstract architectural handling of the conversion of pfn to page table entries and marking a PMD page table entry as a block entry. The helpers are introduced in preparation for supporting PUD hugepages at stage 2, which are supported on arm64 but do not exist on arm.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
By Mark Rutland
When we emulate a guest instruction, we don't advance the hardware singlestep state machine, and thus the guest will receive a software step exception after a next instruction which is not emulated by the host.

We bodge around this in an ad-hoc fashion. Sometimes we explicitly check whether userspace requested a single step, and fake a debug exception from within the kernel. Other times, we advance the HW singlestep state and rely on the HW to generate the exception for us. Thus, the observed step behaviour differs for host and guest.

Let's make this simpler and consistent by always advancing the HW singlestep state machine when we skip an instruction. Thus we can rely on the hardware to generate the singlestep exception for us, and never need to explicitly check for an active-pending step, nor do we need to fake a debug exception from the guest.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 15 December 2018, 1 commit
-
-
By Logan Gunthorpe
This define is used by arm64 to calculate the size of the vmemmap region. It is defined as the log2 of the upper bound on the size of a struct page. We move it into mm_types.h so it can be defined properly, instead of set and checked with a build bug. This also allows us to use the same define for riscv.

Link: http://lkml.kernel.org/r/20181107205433.3875-2-logang@deltatee.com
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
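The definition, once it can see struct page properly, matches the role described above (treat the exact spelling as a sketch):

    /* log2 of an upper bound on sizeof(struct page) */
    #define STRUCT_PAGE_MAX_SHIFT  (order_base_2(sizeof(struct page)))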
-
- 14 December 2018, 4 commits
-
-
By Robin Murphy
The dummy DMA ops are currently used by arm64 for any device which has an invalid ACPI description and is thus barred from using DMA, due to not knowing whether it is cache-coherent or not. Factor these out into general dma-mapping code so that they can be referenced from other common code paths. In the process, we can prune all the optional callbacks which just do the same thing as the default behaviour, and fill in .map_resource for completeness.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[hch: moved to a separate source file]
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
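A minimal sketch of what "dummy" ops look like: every mapping attempt fails and capability queries report no support (function names illustrative):

    static dma_addr_t dma_dummy_map_page(struct device *dev, struct page *page,
            unsigned long offset, size_t size,
            enum dma_data_direction dir, unsigned long attrs)
    {
        return DMA_MAPPING_ERROR;
    }

    static int dma_dummy_supported(struct device *hwdev, u64 mask)
    {
        return 0;  /* no DMA capability at all */
    }

    const struct dma_map_ops dma_dummy_ops = {
        .map_page      = dma_dummy_map_page,
        .dma_supported = dma_dummy_supported,
    };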
-
By Will Deacon
Using shifts directly is error-prone and can cause inadvertent sign extensions or build problems with older versions of binutils. Consistent use of the _BITUL() macro makes these problems disappear.

Signed-off-by: Will Deacon <will.deacon@arm.com>
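An example of the idiom (the specific register bit is illustrative):

    -#define SCTLR_ELx_DSSBS  (1 << 44)   /* shifts past the width of int */
    +#define SCTLR_ELx_DSSBS  (_BITUL(44))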
-
By Will Deacon
Open-coding the pointer-auth HWCAPs is a mess and can be avoided by reusing the multi-cap logic from the CPU errata framework. Move the multi_entry_cap_matches code to cpufeature.h and reuse it for the pointer auth HWCAPs.

Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
We can easily avoid defining the two meta-capabilities for the address and generic keys, so remove them and instead just check both of the architected and impdef capabilities when determining the level of system support.

Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-