- 05 October 2015, 1 commit
-
-
Committed by Mark Salyzyn
This is the arm64 portion of commit 45cac65b ("readahead: fault retry breaks mmap file read random detection"), which was absent from the initial port and has since gone unnoticed. The original commit says:

> .fault now can retry. The retry can break state machine of .fault. In
> filemap_fault, if page is miss, ra->mmap_miss is increased. In the second
> try, since the page is in page cache now, ra->mmap_miss is decreased. And
> these are done in one fault, so we can't detect random mmap file access.
>
> Add a new flag to indicate .fault is tried once. In the second try, skip
> ra->mmap_miss decreasing. The filemap_fault state machine is ok with it.

With this change, Mark reports that:

> Random read improves by 250%, sequential read improves by 40%, and
> random write by 400% to an eMMC device with dm crypto wrapped around it.

Cc: Shaohua Li <shli@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Riley Andrews <riandrews@android.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
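A condensed sketch of the retry handling this entry describes, using the generic fault-flag names from the mm core (FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_TRIED); the surrounding arm64 handler is abbreviated:

    /*
     * Condensed sketch of the arm64 fault-retry path: on the second
     * attempt, FAULT_FLAG_TRIED replaces FAULT_FLAG_ALLOW_RETRY so
     * filemap_fault() can tell a retried fault from a fresh miss and
     * leave ra->mmap_miss alone, keeping random-access detection
     * intact across the retry.
     */
    fault = handle_mm_fault(mm, vma, addr & PAGE_MASK, mm_flags);
    if (fault & VM_FAULT_RETRY) {
        mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
        mm_flags |= FAULT_FLAG_TRIED;
        goto retry;
    }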
-
- 27 July 2015, 2 commits
-
-
Committed by Dave P Martin
Currently, the minimal default BUG() implementation from asm-generic is used for arm64.

This patch uses the BRK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate.

This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG and WARN sites. This also avoids clobbering any registers before they can be dumped.

To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit offsets instead of absolute pointers. (Note that this limits the kernel size to 2GB.)

Traps are registered at arch_initcall time for aarch64, but BUG has minimal real dependencies and it is desirable to be able to generate bug splats as early as possible. This patch redirects all debug exceptions caused by BRK directly to bug_handler() until the full debug exception support has been initialised.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
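A rough sketch of the mechanism, assuming the kernel's BUG_BRK_IMM value of 0x800; the real macros also record file/line metadata and flags, which are omitted here:

    /*
     * Sketch: BUG() becomes a single BRK instruction, and the bug
     * table entry is a 32-bit PC-relative offset emitted into a
     * separate section, rather than inline code and clobbered
     * registers at the call site.
     */
    asm volatile(
        "1:    brk    #0x800\n"                  /* traps to bug_handler() */
        "      .pushsection __bug_table, \"a\"\n"
        "      .align  2\n"
        "      .long   1b - .\n"                 /* bug address as 32-bit offset */
        "      .popsection\n");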
-
Committed by James Morse
'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers temporarily to permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
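A sketch of how the uaccess helpers bracket the user access, assuming the arm64 ALTERNATIVE() patching macro and the ARM64_HAS_PAN capability from this series; the helper-function framing is illustrative (the original patch open-codes this in the {get,put}_user macros):

    /* Sketch: clear PSTATE.PAN before touching user memory and set it
     * again afterwards; ALTERNATIVE() leaves these as NOPs on CPUs
     * without the feature. */
    static inline void uaccess_enable_pan(void)
    {
        asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0),
                        ARM64_HAS_PAN, CONFIG_ARM64_PAN));
    }

    static inline void uaccess_disable_pan(void)
    {
        asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1),
                        ARM64_HAS_PAN, CONFIG_ARM64_PAN));
    }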
-
- 04 July 2015, 1 commit
-
-
Committed by Suzuki K. Poulose
Commit 86dca36e introduced ratelimited usage for 'unhandled_signal' messages. That commit checks the ratelimit irrespective of whether the signal is handled or not, which is wrong and leads to false reports in dmesg such as:

    __do_user_fault: 127 callbacks suppressed

Do the ratelimit check only if the signal is unhandled.

Fixes: 86dca36e ("arm64: use private ratelimit state along with show_unhandled_signals")
Cc: Vladimir Murzin <Vladimir.Murzin@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
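A minimal sketch of the ordering change (print_fault_message stands in for the actual pr_info call):

    /* Before: the ratelimit state is consumed even when the task has
     * a handler installed, suppressing later, genuine reports. */
    if (show_unhandled_signals_ratelimited() && unhandled_signal(tsk, sig))
        print_fault_message(tsk, addr, esr);

    /* After: only genuinely unhandled signals touch the ratelimit. */
    if (unhandled_signal(tsk, sig) && show_unhandled_signals_ratelimited())
        print_fault_message(tsk, addr, esr);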
-
- 19 June 2015, 2 commits
-
-
Committed by Vladimir Murzin
printk_ratelimit() shares its ratelimiting state with other callers, which may lead to scenarios where, by the time we want to print debug information, we are already rate-limited and nothing appears in dmesg; this makes exception-trace a rather poor debugging aid.

Additionally, we have an imbalance where some messages are limited with the global ratelimit state and other messages are limited with their own private state defined via pr_*_ratelimited().

To address this inconsistency, a show_unhandled_signals_ratelimited() macro is introduced and the call sites are converted to use it.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
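A sketch of what such a macro can look like, combining the existing show_unhandled_signals sysctl with a private ratelimit state; the exact body in the tree may differ slightly:

    #define show_unhandled_signals_ratelimited()                    \
    ({                                                              \
        /* private state: not shared with printk_ratelimit() */    \
        static DEFINE_RATELIMIT_STATE(_rs,                          \
                                      DEFAULT_RATELIMIT_INTERVAL,   \
                                      DEFAULT_RATELIMIT_BURST);     \
        bool __show = show_unhandled_signals;                       \
        if (__show)                                                 \
            __show = __ratelimit(&_rs);                             \
        __show;                                                     \
    })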
-
Committed by Vladimir Murzin
Report unhandled SP/PC alignment faults if the show_unhandled_signals variable is set (via /proc/sys/debug/exception-trace).

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 May 2015, 1 commit
-
-
Committed by David Hildenbrand
Introduce faulthandler_disabled() and use it to check for irq context and disabled pagefaults (via pagefault_disable()) in the pagefault handlers.

Please note that we keep the in_atomic() checks in place - to detect whether in irq context (in which case preemption is always properly disabled).

In contrast, preempt_disable() should never be used to disable pagefaults. With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt counter, and therefore the result of in_atomic() differs. We validate that condition by using might_fault() checks when calling might_sleep().

Therefore, add a comment to faulthandler_disabled(), describing why this is needed.

faulthandler_disabled() and pagefault_disable() are defined in linux/uaccess.h, so let's properly add that include to all relevant files.

This patch is based on a patch from Thomas Gleixner.

Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David.Laight@ACULAB.COM
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: airlied@linux.ie
Cc: akpm@linux-foundation.org
Cc: benh@kernel.crashing.org
Cc: bigeasy@linutronix.de
Cc: borntraeger@de.ibm.com
Cc: daniel.vetter@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: hocko@suse.cz
Cc: hughd@google.com
Cc: mst@redhat.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: schwidefsky@de.ibm.com
Cc: yang.shi@windriver.com
Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
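A minimal sketch of the call-site change in an arm64-style fault handler; the surrounding code is abbreviated:

    /*
     * Sketch: bail out to the kernel fixup path when pagefaults are
     * disabled or there is no mm, instead of testing in_atomic()
     * directly (whose meaning changes with !CONFIG_PREEMPT_COUNT).
     */
    if (faulthandler_disabled() || !mm)
        goto no_context;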
-
- 15 January 2015, 1 commit
-
-
Committed by Mark Rutland
Now that we have common ESR_ELx_* macros, move the core arm64 code over to them.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
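For context, a small sketch of the kind of decoding the shared macros standardise, replacing per-file bit definitions in fault.c:

    /* Sketch: decode ESR fields with the common ESR_ELx_* macros. */
    unsigned int ec = ESR_ELx_EC(esr);               /* exception class */
    bool is_write  = (esr & ESR_ELx_WNR) &&
                     !(esr & ESR_ELx_CM);            /* write, not cache op */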
-
- 21 November 2014, 1 commit
-
-
Committed by Will Deacon
Translation faults that occur due to the input address being outside of the address range mapped by the relevant base register are reported as level 0 faults in ESR.DFSC.

If the faulting access cannot be resolved by the kernel (e.g. because it is not mapped by a vma), then we report "input address range fault" on the console. This was fine until we added support for 48-bit VAs, which actually place PGDs at level 0 and can trigger faults for invalid addresses that are within the range of the page tables.

This patch changes the string to report "level 0 translation fault", which is far less confusing.

Signed-off-by: Will Deacon <will.deacon@arm.com>
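The change amounts to rewording one entry in fault.c's fault_info table; a sketch of such an entry, with the handler and signal fields shown as an assumption following the pattern of the level 1-3 entries:

    /* Sketch: a fault_info entry maps a DFSC code to a handler,
     * a signal, and the string printed for unresolved faults. */
    { do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 0 translation fault" },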
-
- 23 July 2014, 1 commit
-
-
Committed by Jungseok Lee
This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1], due to the following issue.

It is a restriction that the kernel logical memory map with 4KB pages + 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since __phys_to_virt for this region reaches an address overflow.

If an SoC design follows the document [1], over 32GB of RAM would be placed from 544GB. Even a 64GB system is supposed to use the region from 544GB to 576GB for only 32GB of RAM. Naturally, this leads to enabling 4 levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt.

However, it is recommended that 4 levels of page tables only be enabled if the memory map is too sparse or there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
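To make the arithmetic behind the level count concrete, a small standalone program (illustrative only; the 544GB-1024GB figures come from the white paper cited above):

    #include <stdio.h>

    int main(void)
    {
        /* With 4KB granules, each translation level resolves 9 bits
         * of virtual address on top of the 12-bit page offset. */
        for (int levels = 3; levels <= 4; levels++) {
            int va_bits = levels * 9 + 12;
            printf("%d levels -> %2d-bit VA -> %llu GB of address space\n",
                   levels, va_bits, (1ULL << va_bits) >> 30);
        }
        /* 3 levels give a 39-bit VA; the kernel linear map gets only
         * 256GB of it, so RAM at physical 544GB-1024GB is out of
         * reach and __phys_to_virt() overflows. */
        return 0;
    }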
-
- 16 May 2014, 1 commit
-
-
Committed by Catalin Marinas
This reverts commit bc07c2c6.

While the aim is increased security for --x memory maps, it does not protect against kernel level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 May 2014, 2 commits
-
-
Committed by Catalin Marinas
The ARMv8 architecture allows execute-only user permissions by clearing the PTE_UXN and PTE_USER bits. The kernel, however, can still access such a page, so execute-only page permission does not protect against read(2)/write(2) etc. accesses. Systems requiring such protection must implement/enable features like SECCOMP.

This patch changes the arm64 __P100 and __S100 protection_map[] macros to the new __PAGE_EXECONLY attributes. A side effect is that pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't set. To work around this, the check is done on the PTE_NG bit via the pte_valid_ng() macro. VM_READ is also checked now for page faults.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
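A sketch of the two pieces the entry names; the PTE bit spellings follow arm64's pgtable headers, and the macro bodies are condensed:

    /* Sketch: --x mappings (PROT_EXEC only) get the execute-only
     * attributes instead of plain read-only user pages. */
    #define __P100  __PAGE_EXECONLY     /* private PROT_EXEC */
    #define __S100  __PAGE_EXECONLY     /* shared  PROT_EXEC */

    /* Sketch: validity check keyed on the nG bit, which - unlike
     * PTE_USER - is still set for execute-only pages. */
    #define pte_valid_ng(pte) \
        ((pte_val(pte) & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG))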
-
Committed by Catalin Marinas
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access, and applications like Qemu rely on such information being provided in sigcontext. This patch introduces ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly in the compat sigcontext.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 September 2013, 1 commit
-
-
Committed by Catalin Marinas
This function is only called from arch/arm64/mm/fault.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 September 2013, 2 commits
-
-
Committed by Johannes Weiner
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults.

Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
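A minimal sketch of the per-arch side of this, as it would look in an arm64-style handler (flag name per the generic mm code):

    /* Sketch: tag faults taken while the CPU was in user mode so
     * generic mm/memcg code can distinguish them from kernel-context
     * faults when deciding whether to invoke the OOM killer. */
    if (user_mode(regs))
        mm_flags |= FAULT_FLAG_USER;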
-
Committed by Johannes Weiner
Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option.

Most architectures already do this, fix up the remaining few.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
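A condensed sketch of the resulting OOM handling in a fault handler (abbreviated; no_context stands for the arch's kernel-fault fixup label):

    /* Sketch: only user-context faults may fall into the OOM path;
     * kernel-context faults take the exception fixup route instead. */
    if (fault & VM_FAULT_OOM) {
        if (!user_mode(regs))
            goto no_context;
        /* Defer to the generic policy (may kill a task or wait). */
        pagefault_out_of_memory();
        return 0;
    }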
-
- 19 July 2013, 1 commit
-
-
Committed by Will Deacon
On arm64, cache maintenance faults appear as data aborts with the CM bit set in the ESR. The WnR bit, usually used to distinguish between faulting loads and stores, always reads as 1 and (slightly confusingly) the instructions are treated as reads by the architecture.

This patch fixes our fault handling code to treat cache maintenance faults in the same way as loads.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
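A sketch of the resulting test, using the ESR bit positions for WnR (bit 6) and CM (bit 8) in the data abort ISS encoding:

    #define ESR_WRITE   (1 << 6)    /* WnR: write, not read */
    #define ESR_CM      (1 << 8)    /* cache maintenance fault */

    /* Sketch: a set WnR bit only counts as a write fault when the
     * access was not a cache maintenance operation. */
    if ((esr & ESR_WRITE) && !(esr & ESR_CM)) {
        vm_flags = VM_WRITE;
        mm_flags |= FAULT_FLAG_WRITE;
    }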
-
- 14 June 2013, 1 commit
-
-
Committed by Steve Capper
Add huge page support to ARM64. Different huge page sizes are supported depending on the size of normal pages:

PAGE_SIZE is 4KB:
   2MB - (pmds) these can be allocated at any time.
1024MB - (puds) usually allocated on bootup with the command line with something like: hugepagesz=1G hugepages=6

PAGE_SIZE is 64KB:
 512MB - (pmds) usually allocated on bootup via command line.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
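An illustrative userspace consumer of this support, mapping one default-size huge page via MAP_HUGETLB (assumes hugetlb pages have been reserved, e.g. through the boot parameters shown above):

    #define _GNU_SOURCE     /* for MAP_HUGETLB on glibc */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* One 2MB huge page (the default size with 4KB base pages). */
        size_t len = 2UL << 20;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");     /* e.g. no huge pages reserved */
            return 1;
        }
        p[0] = 42;              /* touch the page to fault it in */
        munmap(p, len);
        return 0;
    }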
-
- 25 May 2013, 1 commit
-
-
Committed by Catalin Marinas
Currently user faults (page, undefined instruction) are always reported, even though the user may have a signal handler for them. This patch adds an unhandled_signal() check together with printk_ratelimit() for these cases.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
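A minimal sketch of the added gate (the message text is illustrative):

    /* Sketch: only report when the task has no handler installed for
     * the signal, and rate-limit what does get printed. */
    if (unhandled_signal(tsk, sig) && printk_ratelimit())
        pr_info("%s[%d]: unhandled page fault at 0x%08lx\n",
                tsk->comm, task_pid_nr(tsk), addr);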
-
- 08 May 2013, 1 commit
-
-
Committed by Catalin Marinas
The ESR.WnR bit is always set on data cache maintenance faults even though the page is not required to have write permission. If a translation fault (page not yet mapped) happens for a read-only user address range, Linux incorrectly assumes a permission fault. This patch adds a check of the ESR.CM bit during the page fault handling to ignore the 'write' flag (see the sketch under the 19 July 2013 entry above for the resulting test).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Tim Northover <Tim.Northover@arm.com>
Cc: stable@vger.kernel.org
-
- 26 April 2013, 1 commit
-
-
Committed by Steve Capper
show_pte makes use of the *_none_or_clear_bad style functions. If a pgd, pud or pmd is identified as being bad, it will then be cleared. As show_pte appears to be called from either the user or kernel fault handlers, this side effect can lead to unpredictable behaviour; especially as TLB entries are not invalidated.

This patch removes the page table sanitisation from show_pte. If a bad pgd, pud or pmd is encountered, it is left unmodified.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
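A sketch of the non-destructive walk that replaces the clearing helpers, abbreviated to the pgd level (pud and pmd follow the same pattern):

    /* Sketch: report a bad entry but leave it in place, so the dump
     * reflects the page tables exactly as the fault saw them. */
    pgd = pgd_offset(mm, addr);
    printk(", pgd=%016llx", pgd_val(*pgd));
    if (pgd_none(*pgd) || pgd_bad(*pgd))
        goto out;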
-
- 14 November 2012, 1 commit
-
-
Committed by Catalin Marinas
For user space faults, the kernel reports "unhandled page fault" and gives the ESR value. With this patch, the error message is looked up in the fault info array to give a better description.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
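A sketch of the lookup, assuming a fault_info table indexed by the low six ESR status bits as in later versions of fault.c:

    /* Sketch: map the fault status code in the low ESR bits to a
     * table entry carrying a human-readable name. */
    const struct fault_info *inf = fault_info + (esr & 63);

    pr_alert("Unhandled fault: %s (0x%08x) at 0x%016lx\n",
             inf->name, esr, addr);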
-
- 17 September 2012, 1 commit
-
-
Committed by Catalin Marinas
This patch adds support for the handling of the MMU faults (exception entry code introduced by a previous patch) and page table management.

The user translation table is pointed to by TTBR0 and the kernel one (swapper_pg_dir) by TTBR1. There is no translation information shared or address space overlapping between user and kernel page tables.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
-