1. 05 Oct 2015, 1 commit
    • arm64: readahead: fault retry breaks mmap file read random detection · 569ba74a
      Authored by Mark Salyzyn
      This is the arm64 portion of commit 45cac65b ("readahead: fault
      retry breaks mmap file read random detection"), which was absent from
      the initial port and has since gone unnoticed. The original commit says:
      
      > .fault now can retry.  The retry can break state machine of .fault.  In
      > filemap_fault, if page is miss, ra->mmap_miss is increased.  In the second
      > try, since the page is in page cache now, ra->mmap_miss is decreased.  And
      > these are done in one fault, so we can't detect random mmap file access.
      >
      > Add a new flag to indicate .fault is tried once.  In the second try, skip
      > ra->mmap_miss decreasing.  The filemap_fault state machine is ok with it.
      
      With this change, Mark reports that:
      
      > Random read improves by 250%, sequential read improves by 40%, and
      > random write by 400% to an eMMC device with dm crypto wrapped around it.
      
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Riley Andrews <riandrews@android.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      569ba74a
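      The state-machine breakage quoted above can be modelled in a few lines. This is a minimal userspace sketch, not kernel code: the struct, the `fault_once()` helper and the `page_cached` parameter are illustrative, while `FAULT_FLAG_TRIED` mirrors the flag introduced by commit 45cac65b.

      ```c
      #include <assert.h>

      /* Illustrative flag mirroring the kernel's FAULT_FLAG_TRIED */
      #define FAULT_FLAG_TRIED 0x01

      struct readahead_state {
          unsigned int mmap_miss;   /* high value => access looks random */
      };

      /* Simplified model of filemap_fault's mmap_miss accounting.
       * First attempt: the page is missing, so the miss count goes up.
       * Retry: the page is now in the page cache; without FAULT_FLAG_TRIED
       * the miss count would be decremented again, cancelling the increment
       * within a single logical fault and hiding random access patterns. */
      static void fault_once(struct readahead_state *ra, int page_cached,
                             unsigned int flags)
      {
          if (!page_cached) {
              ra->mmap_miss++;              /* cache miss */
          } else if (!(flags & FAULT_FLAG_TRIED)) {
              if (ra->mmap_miss > 0)
                  ra->mmap_miss--;          /* cache hit on a fresh fault */
          }
          /* with FAULT_FLAG_TRIED set, the retry skips the decrement */
      }
      ```

      With the flag, a miss followed by a retry leaves `mmap_miss` raised, so random access is still detected; without it the two halves of one fault cancel out.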
  2. 27 Jul 2015, 2 commits
    • arm64/BUG: Use BRK instruction for generic BUG traps · 9fb7410f
      Authored by Dave P Martin
      Currently, the minimal default BUG() implementation from asm-generic is used for arm64.
      
      This patch uses the BRK software breakpoint instruction to generate
      a trap instead, similarly to most other arches, with the generic
      BUG code generating the dmesg boilerplate.
      
      This allows bug metadata to be moved to a separate table and
      reduces the amount of inline code at BUG and WARN sites.  This also
      avoids clobbering any registers before they can be dumped.
      
      To mitigate the size of the bug table further, this patch makes
      use of the existing infrastructure for encoding addresses within
      the bug table as 32-bit offsets instead of absolute pointers.
      (Note that this limits the kernel size to 2GB.)
      
      Traps are registered at arch_initcall time for aarch64, but BUG
      has minimal real dependencies and it is desirable to be able to
      generate bug splats as early as possible.  This patch redirects
      all debug exceptions caused by BRK directly to bug_handler() until
      the full debug exception support has been initialised.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9fb7410f
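      The relative-pointer encoding mentioned above (the existing `GENERIC_BUG_RELATIVE_POINTERS` infrastructure) can be sketched as follows. The struct layout and helper here are an illustrative model, not the exact kernel definitions: each bug-table entry stores a signed 32-bit displacement from its own address instead of a 64-bit absolute pointer, which is what both halves the entry size and imposes the 2GB range limit noted in the commit.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustrative model of a relative-pointer bug-table entry */
      struct bug_entry {
          int32_t bug_addr_disp;    /* signed offset to the BUG site */
      };

      /* Recover the absolute address of the BUG site: the stored offset
       * is relative to the entry's own location in the table. */
      static uintptr_t bug_addr(const struct bug_entry *e)
      {
          return (uintptr_t)e + e->bug_addr_disp;
      }
      ```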
    • arm64: kernel: Add support for Privileged Access Never · 338d4f49
      Authored by James Morse
      'Privileged Access Never' is a new ARMv8.1 feature which prevents
      privileged code from accessing any virtual address where read or write
      access is also permitted at EL0.
      
      This patch enables the PAN feature on all CPUs and modifies the
      {get,put}_user helpers to temporarily permit access.
      
      This will catch kernel bugs where user memory is accessed directly.
      'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: use ALTERNATIVE in asm and tidy up pan_enable check]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      338d4f49
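      The PAN semantics described above can be modelled in userspace. This is a toy sketch only: `pstate_pan`, `uaccess_read()` and `get_user_model()` are invented names standing in for the PSTATE.PAN bit, a raw privileged access to user memory, and the patched get_user helper respectively.

      ```c
      #include <assert.h>

      /* Toy stand-in for PSTATE.PAN: set means privileged code must not
       * touch user-accessible memory. */
      static int pstate_pan = 1;

      /* Model of a raw privileged load from a user address: under PAN it
       * faults (returns -1) instead of reading the data. */
      static int uaccess_read(const int *user_val, int *out)
      {
          if (pstate_pan)
              return -1;            /* would fault with PAN enabled */
          *out = *user_val;
          return 0;
      }

      /* Model of the patched get_user helper: clear PAN around the access
       * and restore it afterwards, so only deliberate uaccess succeeds. */
      static int get_user_model(const int *user_val, int *out)
      {
          pstate_pan = 0;           /* temporarily permit access */
          int ret = uaccess_read(user_val, out);
          pstate_pan = 1;           /* re-arm PAN */
          return ret;
      }
      ```

      A direct access (a kernel bug of the kind the feature is meant to catch) fails, while the wrapped helper succeeds and leaves PAN enabled afterwards.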
  3. 04 Jul 2015, 1 commit
  4. 19 Jun 2015, 2 commits
  5. 19 May 2015, 1 commit
    • mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handler · 70ffdb93
      Authored by David Hildenbrand
      Introduce faulthandler_disabled() and use it to check for irq context and
      disabled pagefaults (via pagefault_disable()) in the pagefault handlers.
      
      Note that we keep the in_atomic() checks in place, to detect whether
      we are in irq context (in which case preemption is always properly
      disabled).
      
      In contrast, preempt_disable() should never be used to disable pagefaults.
      With !CONFIG_PREEMPT_COUNT, preempt_disable() doesn't modify the preempt
      counter, and therefore the result of in_atomic() differs.
      We validate that condition by using might_fault() checks when calling
      might_sleep().
      
      Therefore, add a comment to faulthandler_disabled(), describing why this
      is needed.
      
      faulthandler_disabled() and pagefault_disable() are defined in
      linux/uaccess.h, so let's properly add that include to all relevant files.
      
      This patch is based on a patch from Thomas Gleixner.
      Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: David.Laight@ACULAB.COM
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: airlied@linux.ie
      Cc: akpm@linux-foundation.org
      Cc: benh@kernel.crashing.org
      Cc: bigeasy@linutronix.de
      Cc: borntraeger@de.ibm.com
      Cc: daniel.vetter@intel.com
      Cc: heiko.carstens@de.ibm.com
      Cc: herbert@gondor.apana.org.au
      Cc: hocko@suse.cz
      Cc: hughd@google.com
      Cc: mst@redhat.com
      Cc: paulus@samba.org
      Cc: ralf@linux-mips.org
      Cc: schwidefsky@de.ibm.com
      Cc: yang.shi@windriver.com
      Link: http://lkml.kernel.org/r/1431359540-32227-7-git-send-email-dahi@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      70ffdb93
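      The check the commit introduces can be sketched as a simple predicate over two counters. This is a simplified model, assuming counter-based bookkeeping; the counter variables here are illustrative stand-ins for the per-task pagefault-disable depth and the irq-context state that in_atomic() reports.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative stand-ins for the real per-task/per-CPU state */
      static int pagefault_disable_count;
      static int irq_count;

      static void pagefault_disable(void) { pagefault_disable_count++; }
      static void pagefault_enable(void)  { pagefault_disable_count--; }

      /* The fault handler must bail out early if pagefaults were disabled
       * via pagefault_disable() OR if we are in irq context, where
       * preemption is properly disabled even with !CONFIG_PREEMPT_COUNT. */
      static bool faulthandler_disabled(void)
      {
          return pagefault_disable_count > 0 || irq_count > 0;
      }
      ```

      The point of the separate predicate is that preempt_disable() alone is not a reliable signal: with !CONFIG_PREEMPT_COUNT it does not touch the preempt counter, so only an explicit pagefault-disable count (or irq context) can be trusted here.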
  6. 15 Jan 2015, 1 commit
  7. 21 Nov 2014, 1 commit
    • arm64: mm: report unhandled level-0 translation faults correctly · 7f73f7ae
      Authored by Will Deacon
      Translation faults that occur due to the input address being outside
      of the address range mapped by the relevant base register are reported
      as level 0 faults in ESR.DFSC.
      
      If the faulting access cannot be resolved by the kernel (e.g. because
      it is not mapped by a vma), then we report "input address range fault"
      on the console. This was fine until we added support for 48-bit VAs,
      which actually place PGDs at level 0 and can trigger faults for invalid
      addresses that are within the range of the page tables.
      
      This patch changes the string to report "level 0 translation fault",
      which is far less confusing.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7f73f7ae
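      The change amounts to renaming one entry in a DFSC-indexed name table. The sketch below is a hypothetical stand-in for arm64's fault_info[] lookup, showing only the translation-fault entries; the encodings 0x04-0x07 are the ARMv8 ESR.DFSC values for translation faults at levels 0-3.

      ```c
      #include <assert.h>
      #include <string.h>

      /* Illustrative DFSC-to-name lookup in the spirit of fault_info[] */
      static const char *fault_name(unsigned int dfsc)
      {
          switch (dfsc) {
          case 0x04: return "level 0 translation fault";
                     /* previously "input address range fault" */
          case 0x05: return "level 1 translation fault";
          case 0x06: return "level 2 translation fault";
          case 0x07: return "level 3 translation fault";
          default:   return "unknown fault";
          }
      }
      ```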
  8. 23 Jul 2014, 1 commit
    • arm64: mm: Implement 4 levels of translation tables · c79b954b
      Authored by Jungseok Lee
      This patch implements 4 levels of translation tables, since 3 levels
      of page tables with 4KB pages cannot support the 40-bit physical
      address space described in [1], due to the following issue.
      
      The kernel logical memory map with 4KB pages + 3 levels
      (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region
      from 544GB to 1024GB described in [1]. Specifically, the ARM64 kernel
      fails to create a mapping for this region in map_mem() because
      __phys_to_virt() for this region overflows.
      
      If an SoC design follows [1], RAM beyond the first 32GB is placed
      from 544GB upwards; even a 64GB system is supposed to use the region
      from 544GB to 576GB for just 32GB of RAM. The natural fix is to
      enable 4 levels of page tables rather than hack __virt_to_phys and
      __phys_to_virt.
      
      However, it is recommended that 4 levels of page tables be enabled
      only when the memory map is too sparse or there is around 512GB of RAM.
      
      References
      ----------
      [1]: Principles of ARM Memory Maps, White Paper, Issue C
      Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
      Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
      Acked-by: Kukjin Kim <kgene.kim@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      [catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
      [catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
      [catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
      c79b954b
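      The overflow can be demonstrated with the numbers from the commit message. This is a worked arithmetic example, not kernel code: the `phys_to_virt()` function models the arm64 linear-map conversion, with PAGE_OFFSET taken from the range quoted above and PHYS_OFFSET set to the 544GB RAM base from [1]. With 4KB pages and 3 levels the linear map spans only 256GB, so physical addresses more than 256GB above PHYS_OFFSET wrap past 2^64.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define GB           (1ULL << 30)
      /* Start of the 4KB + 3-level kernel logical map, per the commit */
      #define PAGE_OFFSET  0xffffffc000000000ULL
      /* RAM base from the "Principles of ARM Memory Maps" layout */
      #define PHYS_OFFSET  (544 * GB)

      /* Model of the linear-map conversion used by map_mem() */
      static uint64_t phys_to_virt(uint64_t phys)
      {
          return phys - PHYS_OFFSET + PAGE_OFFSET;
      }
      ```

      The first 256GB of RAM maps cleanly; at 544GB + 256GB = 800GB physical, the computed virtual address is PAGE_OFFSET + 256GB = 2^64, which wraps to 0, which is why a 40-bit physical address space forces either 4 translation levels or address-conversion hacks.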
  9. 16 May 2014, 1 commit
  10. 09 May 2014, 2 commits
    • arm64: Introduce execute-only page access permissions · bc07c2c6
      Authored by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. The kernel, however, can still access
      such pages, so execute-only page permissions do not protect against
      read(2)/write(2) etc. accesses. Systems requiring such protection must
      implement/enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_valid_user() no longer triggers for __PAGE_EXECONLY since PTE_USER
      isn't set. To work around this, the check is done on the PTE_NG bit via
      the pte_valid_ng() macro. VM_READ is also checked now for page faults.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bc07c2c6
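      The PTE-bit reasoning above can be modelled directly. This is a toy model: the bit positions are illustrative, not the real arm64 encodings, but the logic matches the commit. An execute-only user page clears both PTE_USER and PTE_UXN, so a validity check keyed on PTE_USER misses it, while PTE_NG (set on all user mappings) still identifies it.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative bit positions, not the real arm64 PTE layout */
      #define PTE_VALID (1u << 0)
      #define PTE_USER  (1u << 1)  /* user read/write access */
      #define PTE_NG    (1u << 2)  /* not-global: set on all user mappings */
      #define PTE_UXN   (1u << 3)  /* user execute-never */

      /* Old check: keyed on PTE_USER, so it misses execute-only pages */
      static bool pte_valid_user(unsigned int pte)
      {
          return (pte & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER);
      }

      /* New check: keyed on PTE_NG, which execute-only pages still carry */
      static bool pte_valid_ng(unsigned int pte)
      {
          return (pte & (PTE_VALID | PTE_NG)) == (PTE_VALID | PTE_NG);
      }
      ```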
    • arm64: Provide read/write fault information in compat signal handlers · 9141300a
      Authored by Catalin Marinas
      For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault
      was caused by a write access and applications like Qemu rely on such
      information being provided in sigcontext. This patch introduces the
      ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly
      in compat sigcontext.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9141300a
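      The fixup amounts to copying one bit between two register formats. The helper below is a hypothetical sketch of that translation: ESR_ELx reports a write data abort in its WnR bit (bit 6 of the ISS), and the FSR value handed to a 32-bit process in compat sigcontext carries the same information in bit 11, as the commit describes.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define ESR_WNR    (1u << 6)    /* ESR_ELx.WnR: write, not read */
      #define FSR_WRITE  (1u << 11)   /* AArch32 FSR WnR bit */

      /* Illustrative helper: propagate the write/read indication from the
       * tracked ESR into the FSR exposed to compat signal handlers. */
      static uint32_t compat_fsr(uint32_t fsr, uint32_t esr)
      {
          if (esr & ESR_WNR)
              fsr |= FSR_WRITE;
          return fsr;
      }
      ```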
  11. 20 Sep 2013, 1 commit
  12. 13 Sep 2013, 2 commits
  13. 19 Jul 2013, 1 commit
  14. 14 Jun 2013, 1 commit
  15. 25 May 2013, 1 commit
  16. 08 May 2013, 1 commit
  17. 26 Apr 2013, 1 commit
    • arm64: mm: Correct show_pte behaviour · 4339e3f3
      Authored by Steve Capper
      show_pte makes use of the *_none_or_clear_bad style functions. If a
      pgd, pud or pmd is identified as being bad, it will then be cleared.
      
      As show_pte appears to be called from either the user or kernel
      fault handlers, this side effect can lead to unpredictable behaviour,
      especially as TLB entries are not invalidated.
      
      This patch removes the page table sanitisation from show_pte. If a
      bad pgd, pud or pmd is encountered it is left unmodified.
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4339e3f3
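      The side effect being removed can be seen in a small model. This is an illustrative sketch, not the real helpers: the destructive variant zeroes a corrupt entry as it tests it (as the *_none_or_clear_bad family does), while a purely diagnostic dump like show_pte needs the read-only check so the corrupt value survives for inspection.

      ```c
      #include <assert.h>

      /* Illustrative marker for a corrupt table entry */
      #define ENTRY_BAD 0xdeadbeefu

      /* Model of the *_none_or_clear_bad style: bad entries are zeroed
       * as a side effect of the test, destroying the evidence. */
      static int pmd_none_or_clear_bad(unsigned int *pmd)
      {
          if (*pmd == 0)
              return 1;
          if (*pmd == ENTRY_BAD) {
              *pmd = 0;                 /* destructive clear */
              return 1;
          }
          return 0;
      }

      /* Read-only check suitable for a diagnostic dump: the entry is
       * reported bad but left untouched. */
      static int pmd_none_or_bad(const unsigned int *pmd)
      {
          return *pmd == 0 || *pmd == ENTRY_BAD;
      }
      ```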
  18. 14 Nov 2012, 1 commit
  19. 17 Sep 2012, 1 commit