1. 12 Jun 2017, 4 commits
  2. 30 May 2017, 1 commit
  3. 07 Apr 2017, 1 commit
    • arm64: print a fault message when attempting to write RO memory · b824b930
      Authored by Stephen Boyd
      If a page is marked read-only we should say so, instead of just
      reporting a generic page fault. Right now we get a cryptic error
      message that an unhandled fault occurred, because we don't decode
      the ESR to work out that it was a read/write permission fault.
      
      Instead of seeing:
      
        Unable to handle kernel paging request at virtual address ffff000008e460d8
        pgd = ffff800003504000
        [ffff000008e460d8] *pgd=0000000083473003, *pud=0000000083503003, *pmd=0000000000000000
        Internal error: Oops: 9600004f [#1] PREEMPT SMP
      
      we'll see:
      
        Unable to handle kernel write to read-only memory at virtual address ffff000008e760d8
        pgd = ffff80003d3de000
        [ffff000008e760d8] *pgd=0000000083472003, *pud=0000000083435003, *pmd=0000000000000000
        Internal error: Oops: 9600004f [#1] PREEMPT SMP
      
      We also add a userspace address check into is_permission_fault()
      so that the function doesn't return true for ttbr0 PAN faults
      when it shouldn't.
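
      A minimal sketch of the kind of ESR decoding involved (hedged: this is
      not the exact upstream diff, and the is_permission_fault() signature is
      approximate; ESR_ELx_WNR and TASK_SIZE come from the usual arm64
      headers):

        /*
         * Sketch only: choose a more descriptive oops message by decoding
         * the ESR.  ESR_ELx_WNR is the write-not-read bit of a data abort.
         */
        static const char *fault_message(unsigned long addr, unsigned int esr)
        {
                /* skip user addresses so ttbr0 PAN faults keep the old text */
                if (addr >= TASK_SIZE && is_permission_fault(esr) &&
                    (esr & ESR_ELx_WNR))
                        return "write to read-only memory";
                return "paging request";        /* previous generic message */
        }
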
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Stephen Boyd <stephen.boyd@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b824b930
  4. 04 Apr 2017, 1 commit
    • arm64: mm: unaligned access by user-land should be received as SIGBUS · 09a6adf5
      Authored by Victor Kamensky
      Since commit 52d7523d ("arm64: mm: allow the kernel to handle alignment
      faults on user accesses"), user-land accesses that raise alignment
      faults (for example aarch32 ldm/stm/ldrd/strd instructions operating on
      unaligned memory) have been delivered to user-land as SIGSEGV. This is
      wrong; they should be reported as SIGBUS, as they were before that
      commit.

      Change do_bad_area() to take the signal and code from the fault_info
      table based on the ESR value, so that do_alignment_fault() delivers
      SIGBUS to user-land. Access to the fault_info table is wrapped in a new
      esr_to_fault_info() function.
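
      A sketch of the shape of the change (hedged: helper names and
      signatures follow arch/arm64/mm/fault.c of that era, but are
      reproduced from memory rather than quoted from the exact diff):

        static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
        {
                return fault_info + (esr & 63); /* low 6 bits: fault status code */
        }

        static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
        {
                const struct fault_info *inf = esr_to_fault_info(esr);

                if (user_mode(regs))
                        /* alignment faults now deliver inf->sig == SIGBUS */
                        __do_user_fault(current, addr, esr, inf->sig, inf->code, regs);
                else
                        __do_kernel_fault(current->mm, addr, esr, regs);
        }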
      
      Cc: <stable@vger.kernel.org>
      Fixes: 52d7523d (arm64: mm: allow the kernel to handle alignment faults on user accesses)
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      09a6adf5
  5. 02 Mar 2017, 2 commits
  6. 10 Jan 2017, 1 commit
    • arm64: Remove useless UAO IPI and describe how this gets enabled · c8b06e3f
      Authored by James Morse
      Since its introduction, the UAO enable call has been both broken and
      useless. Commit 2a6dcb2b ("arm64: cpufeature: Schedule enable() calls
      instead of calling them via IPI") fixed the framework so that these
      calls are scheduled and can therefore modify PSTATE.

      Now the call is merely useless, so remove it. UAO is enabled by the
      code patching which causes get_user() and friends to use the 'ldtr'
      family of instructions. This relies on the PSTATE.UAO bit being set to
      match addr_limit, which we do in uao_thread_switch(), called via
      __switch_to().

      All that is needed to enable UAO is to patch the code and call
      schedule(). __apply_alternatives_multi_stop() calls stop_machine() when
      it modifies the kernel text to enable the alternatives (including the
      UAO code in uao_thread_switch()). Once stop_machine() has finished,
      __switch_to() is called to reschedule the original task, which causes
      PSTATE.UAO to be set appropriately. An explicit enable() call is not
      needed.
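
      For reference, the context-switch hook that keeps PSTATE.UAO in sync
      looks roughly like this (a sketch modelled on
      arch/arm64/kernel/process.c; the macro names are assumptions, not a
      quote of this patch):

        /* Sketch: set PSTATE.UAO to match the next task's addr_limit. */
        void uao_thread_switch(struct task_struct *next)
        {
                if (IS_ENABLED(CONFIG_ARM64_UAO)) {
                        if (task_thread_info(next)->addr_limit == KERNEL_DS)
                                asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
                        else
                                asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
                }
        }
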
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      c8b06e3f
  7. 05 Jan 2017, 1 commit
  8. 22 Nov 2016, 2 commits
  9. 20 Oct 2016, 2 commits
  10. 20 Sep 2016, 1 commit
  11. 26 Aug 2016, 1 commit
    • arm64: Introduce execute-only page access permissions · cab15ce6
      Authored by Catalin Marinas
      The ARMv8 architecture allows execute-only user permissions by clearing
      the PTE_UXN and PTE_USER bits. However, the kernel running on a CPU
      implementation without User Access Override (an ARMv8.2 onwards
      feature) can still access such a page, so execute-only page permission
      does not protect against read(2)/write(2) etc. accesses. Systems
      requiring such protection must enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't
      set. To work around this, the check is done on the PTE_NG bit via the
      pte_ng() macro. VM_READ is also checked now for page faults.
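
      From user space, an execute-only mapping is simply one requested with
      PROT_EXEC and neither PROT_READ nor PROT_WRITE, e.g. (illustrative
      snippet; 'code' and 'code_len' are placeholders):

        #include <string.h>
        #include <sys/mman.h>

        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(p, code, code_len);      /* copy the machine code in */
        mprotect(p, 4096, PROT_EXEC);   /* execute-only from here on */
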
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      cab15ce6
  12. 13 Aug 2016, 1 commit
    • arm64: Handle el1 synchronous instruction aborts cleanly · 9adeb8e7
      Authored by Laura Abbott
      Executing from a non-executable area gives an ugly message:
      
      lkdtm: Performing direct entry EXEC_RODATA
      lkdtm: attempting ok execution at ffff0000084c0e08
      lkdtm: attempting bad execution at ffff000008880700
      Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
      CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
      Hardware name: linux,dummy-virt (DT)
      task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
      PC is at lkdtm_rodata_do_nothing+0x0/0x8
      LR is at execute_location+0x74/0x88
      
      The 'IABT (current EL)' indicates the error but it's a bit cryptic
      without knowledge of the ARM ARM. There is also no indication of the
      specific address which triggered the fault. The increase in kernel
      page permissions makes hitting this case more likely as well.
      Handling the case in the vectors gives a much more familiar looking
      error message:
      
      lkdtm: Performing direct entry EXEC_RODATA
      lkdtm: attempting ok execution at ffff0000084c0840
      lkdtm: attempting bad execution at ffff000008880680
      Unable to handle kernel paging request at virtual address ffff000008880680
      pgd = ffff8000089b2000
      [ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
      Internal error: Oops: 8400000e [#1] PREEMPT SMP
      Modules linked in:
      CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
      Hardware name: linux,dummy-virt (DT)
      task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
      PC is at lkdtm_rodata_do_nothing+0x0/0x8
      LR is at execute_location+0x74/0x88
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9adeb8e7
  13. 27 Jul 2016, 1 commit
  14. 19 Jul 2016, 1 commit
    • arm64: Kprobes with single stepping support · 2dd0e8d2
      Authored by Sandeepa Prabhu
      Add support for basic kernel probes (kprobes) and jump probes
      (jprobes) for ARM64.

      Kprobes utilizes the software breakpoint and single-step debug
      exceptions supported on ARMv8.
      
      A software breakpoint is placed at the probe address to trap the
      kernel execution into the kprobe handler.
      
      ARM v8 supports enabling single stepping before the break exception
      return (ERET), with next PC in exception return address (ELR_EL1). The
      kprobe handler prepares an executable memory slot for out-of-line
      execution with a copy of the original instruction being probed, and
      enables single stepping. The PC is set to the out-of-line slot address
      before the ERET. With this scheme, the instruction is executed with the
      exact same register context except for the PC (and DAIF) registers.
      
      The debug mask (PSTATE.D) is enabled only when single-stepping a
      recursive kprobe, i.e. on kprobe re-entry, so that the probed
      instruction can be single-stepped within the kprobe handler's exception
      context. The recursion depth of a kprobe is always 2: upon re-entry,
      any further re-entry is prevented by not calling the handlers, and the
      event is counted as a missed kprobe.

      Single-stepping from the out-of-line slot has a drawback for
      PC-relative accesses such as branches and literal loads, since the
      offset from the new PC (the slot address) is not guaranteed to fit in
      the immediate field of the opcode. Such instructions need simulation,
      so reject probing them.
      
      Instructions generating exceptions or cpu mode change are rejected
      for probing.
      
      Exclusive load/store instructions are rejected too.  Additionally, the
      code is checked to see if it is inside an exclusive load/store sequence
      (code from Pratyush).
      
      System instructions are mostly enabled for stepping, except MSR/MRS
      accesses to "DAIF" flags in PSTATE, which are not safe for
      probing.
      
      This also changes arch/arm64/include/asm/ptrace.h to use
      include/asm-generic/ptrace.h.
      
      Thanks to Steve Capper and Pratyush Anand for several suggested
      changes.
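
      As a usage illustration (standard kprobes API rather than part of this
      patch; the probed symbol is just an example), a probe can be registered
      from a module like so:

        #include <linux/kprobes.h>
        #include <linux/module.h>

        static int pre(struct kprobe *p, struct pt_regs *regs)
        {
                pr_info("kprobe hit at %pS, pc=%llx\n", p->addr, regs->pc);
                return 0;       /* continue: single-step the original insn */
        }

        static struct kprobe kp = {
                .symbol_name = "do_sys_open",   /* example target */
                .pre_handler = pre,
        };

        static int __init kp_init(void) { return register_kprobe(&kp); }
        static void __exit kp_exit(void) { unregister_kprobe(&kp); }
        module_init(kp_init);
        module_exit(kp_exit);
        MODULE_LICENSE("GPL");
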
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2dd0e8d2
  15. 07 Jul 2016, 1 commit
  16. 22 Jun 2016, 2 commits
    • arm64: kill ESR_LNX_EXEC · 541ec870
      Authored by Mark Rutland
      Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
      instruction aborts from data aborts, but this bit is architecturally
      RES0 for instruction aborts, and could be allocated for an arbitrary
      purpose in future. Additionally, we hard-code the value in entry.S
      without the mnemonic, making the code difficult to understand.
      
      Instead, remove ESR_LNX_EXEC and distinguish aborts based on the ESR,
      which is already passed to the sole user of ESR_LNX_EXEC. A new helper,
      is_el0_instruction_abort(), is added to make the logic clear. Any
      instruction abort taken from EL1 will already have been handled by
      bad_mode, so we need not handle that case in the helper.
      
      For consistency, the existing permission_fault helper is renamed to
      is_permission_fault, and the return type is changed to bool. There
      should be no functional changes as the return value was a boolean
      expression, and the result is only used in another boolean expression.
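
      The new helper is essentially a one-liner over the EC field (sketch):

        static inline bool is_el0_instruction_abort(unsigned int esr)
        {
                /* instruction abort taken from a lower exception level (EL0) */
                return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
        }
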
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave P Martin <dave.martin@arm.com>
      Cc: Huang Shijie <shijie.huang@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      541ec870
    • arm64: add macro to extract ESR_ELx.EC · 275f344b
      Authored by Mark Rutland
      Several places open-code extraction of the EC field from an ESR_ELx
      value, in subtly different ways. This is unfortunate duplication and
      variation, and the precise logic used to extract the field is a
      distraction.
      
      This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
      an ESR_ELx value in a consistent fashion.
      
      Existing open-coded extractions in core arm64 code are moved over to the
      new helper. KVM code is left as-is for the moment.
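
      The helper boils down to a mask-and-shift of ESR_ELx bits [31:26]
      (sketch of the definitions, as found in asm/esr.h):

        #define ESR_ELx_EC_SHIFT        (26)
        #define ESR_ELx_EC_MASK         (UL(0x3F) << ESR_ELx_EC_SHIFT)
        #define ESR_ELx_EC(esr)         (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
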
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Huang Shijie <shijie.huang@arm.com>
      Cc: Dave P Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      275f344b
  17. 14 Jun 2016, 1 commit
    • arm64: mm: mark fault_info table const · bbb1681e
      Authored by Mark Rutland
      Unlike the debug_fault_info table, we never intentionally alter the
      fault_info table at runtime, and all derived pointers are treated as
      const currently.
      
      Make the table const so that it can be placed in .rodata and protected
      from unintentional writes, as we do for the syscall tables.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      bbb1681e
  18. 08 Jun 2016, 1 commit
    • arm64: mm: always take dirty state from new pte in ptep_set_access_flags · 0106d456
      Authored by Will Deacon
      Commit 66dbd6e6 ("arm64: Implement ptep_set_access_flags() for
      hardware AF/DBM") ensured that pte flags are updated atomically in the
      face of potential concurrent, hardware-assisted updates. However, Alex
      reports that:
      
       | This patch breaks swapping for me.
       | In the broken case, you'll see either systemd cpu time spike (because
       | it's stuck in a page fault loop) or the system hang (because the
       | application owning the screen is stuck in a page fault loop).
      
      It turns out that this is because the 'dirty' argument to
      ptep_set_access_flags is always 0 for read faults, and so we can't use
      it to set PTE_RDONLY. The failing sequence is:
      
        1. We put down a PTE_WRITE | PTE_DIRTY | PTE_AF pte
        2. Memory pressure -> pte_mkold(pte) -> clear PTE_AF
        3. A read faults due to the missing access flag
        4. ptep_set_access_flags is called with dirty = 0, due to the read fault
        5. pte is then made PTE_WRITE | PTE_DIRTY | PTE_AF | PTE_RDONLY (!)
        6. A write faults, but pte_write is true so we get stuck
      
      The solution is to check the new page table entry (as would be done by
      the generic, non-atomic definition of ptep_set_access_flags that just
      calls set_pte_at) to establish the dirty state.
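
      Conceptually the fix looks like the following (hedged sketch, not the
      verbatim diff): the read-only decision is derived from the new entry
      itself rather than from the 'dirty' argument.

        /* keep only the bits ptep_set_access_flags() is allowed to set */
        pte_val(entry) &= PTE_AF | PTE_WRITE | PTE_DIRTY;

        /*
         * Make the pte read-only again unless the *new* entry is both
         * writable and (software-)dirty, instead of trusting dirty == 1.
         */
        if (!pte_write(entry) || !pte_sw_dirty(entry))
                pte_val(entry) |= PTE_RDONLY;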
      
      Cc: <stable@vger.kernel.org> # 4.3+
      Fixes: 66dbd6e6 ("arm64: Implement ptep_set_access_flags() for hardware AF/DBM")
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Alexander Graf <agraf@suse.de>
      Tested-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0106d456
  19. 19 Apr 2016, 1 commit
  20. 16 Apr 2016, 1 commit
    • arm64: Implement ptep_set_access_flags() for hardware AF/DBM · 66dbd6e6
      Authored by Catalin Marinas
      When hardware updates of the access and dirty states are enabled, the
      default ptep_set_access_flags() implementation based on calling
      set_pte_at() directly is potentially racy. This triggers the "racy dirty
      state clearing" warning in set_pte_at() because an existing writable PTE
      is overridden with a clean entry.
      
      There are two main scenarios for this situation:
      
      1. The CPU getting an access fault does not support hardware updates of
         the access/dirty flags. However, a different agent in the system
         (e.g. SMMU) can do this, therefore overriding a writable entry with a
         clean one could potentially lose the automatically updated dirty
         status
      
      2. A more complex situation is possible when all CPUs support hardware
         AF/DBM:
      
         a) Initial state: shareable + writable vma and pte_none(pte)
         b) Read fault taken by two threads of the same process on different
            CPUs
         c) CPU0 takes the mmap_sem and proceeds to handling the fault. It
            eventually reaches do_set_pte() which sets a writable + clean pte.
            CPU0 releases the mmap_sem
         d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The
            pte entry it reads is present, writable and clean and it continues
            to pte_mkyoung()
         e) CPU1 calls ptep_set_access_flags()
      
         If between (d) and (e) the hardware (another CPU) updates the dirty
         state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit
         marking the entry clean again.
      
      This patch implements an arm64-specific ptep_set_access_flags() function
      to perform an atomic update of the PTE flags.
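
      In outline, the arm64 version merges the new flags into the live pte
      with an atomic read-modify-write instead of a plain set_pte_at(). A
      C-level equivalent of the idea (the real code is an ldxr/stxr inline
      assembly loop):

        pteval_t old, new;

        do {
                old = READ_ONCE(pte_val(*ptep));
                /* fold in the new access flags without losing a concurrent DBM update */
                new = (old & ~PTE_RDONLY) | (pte_val(entry) & (PTE_AF | PTE_WRITE | PTE_DIRTY));
        } while (cmpxchg_relaxed(&pte_val(*ptep), old, new) != old);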
      
      Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Ming Lei <tom.leiming@gmail.com>
      Tested-by: Julien Grall <julien.grall@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org> # 4.3+
      [will: reworded comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      66dbd6e6
  21. 15 Apr 2016, 1 commit
    • arm64: mm: Add trace_irqflags annotations to do_debug_exception() · 6afedcd2
      Authored by James Morse
      With CONFIG_PROVE_LOCKING, CONFIG_DEBUG_LOCKDEP and CONFIG_TRACE_IRQFLAGS
      enabled, lockdep will compare current->hardirqs_enabled with the flags from
      local_irq_save().
      
      When a debug exception occurs, interrupts are disabled in entry.S, but
      lockdep isn't told, resulting in:
      DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
      ------------[ cut here ]------------
      WARNING: at ../kernel/locking/lockdep.c:3523
      Modules linked in:
      CPU: 3 PID: 1752 Comm: perf Not tainted 4.5.0-rc4+ #2204
      Hardware name: ARM Juno development board (r1) (DT)
      task: ffffffc974868000 ti: ffffffc975f40000 task.ti: ffffffc975f40000
      PC is at check_flags.part.35+0x17c/0x184
      LR is at check_flags.part.35+0x17c/0x184
      pc : [<ffffff80080fc93c>] lr : [<ffffff80080fc93c>] pstate: 600003c5
      [...]
      ---[ end trace 74631f9305ef5020 ]---
      Call trace:
      [<ffffff80080fc93c>] check_flags.part.35+0x17c/0x184
      [<ffffff80080ffe30>] lock_acquire+0xa8/0xc4
      [<ffffff8008093038>] breakpoint_handler+0x118/0x288
      [<ffffff8008082434>] do_debug_exception+0x3c/0xa8
      [<ffffff80080854b4>] el1_dbg+0x18/0x6c
      [<ffffff80081e82f4>] do_filp_open+0x64/0xdc
      [<ffffff80081d6e60>] do_sys_open+0x140/0x204
      [<ffffff80081d6f58>] SyS_openat+0x10/0x18
      [<ffffff8008085d30>] el0_svc_naked+0x24/0x28
      possible reason: unannotated irqs-off.
      irq event stamp: 65857
      hardirqs last  enabled at (65857): [<ffffff80081fb1c0>] lookup_mnt+0xf4/0x1b4
      hardirqs last disabled at (65856): [<ffffff80081fb188>] lookup_mnt+0xbc/0x1b4
      softirqs last  enabled at (65790): [<ffffff80080bdca4>] __do_softirq+0x1f8/0x290
      softirqs last disabled at (65757): [<ffffff80080be038>] irq_exit+0x9c/0xd0
      
      This patch adds the annotations to do_debug_exception(), while trying not
      to call trace_hardirqs_off() if el1_dbg() interrupted a task that already
      had irqs disabled.
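
      The annotation amounts to bracketing the handler and telling lockdep
      about the state entry.S left us in (sketch of the pattern):

        /*
         * entry.S disabled irqs but lockdep was not told; only annotate if
         * they were enabled in the interrupted context, so we don't clobber
         * the last-disabled record of a task that already had irqs off.
         */
        if (interrupts_enabled(regs))
                trace_hardirqs_off();

        /* ... dispatch to the debug fault handler ... */

        if (interrupts_enabled(regs))
                trace_hardirqs_on();
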
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6afedcd2
  22. 18 Mar 2016, 1 commit
  23. 19 Feb 2016, 5 commits
    • arm64: User die() instead of panic() in do_page_fault() · 70c8abc2
      Authored by Catalin Marinas
      The former gives better error reporting on unhandled permission faults
      (introduced by the UAO patches).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      70c8abc2
    • arm64: mm: allow the kernel to handle alignment faults on user accesses · 52d7523d
      Authored by EunTaik Lee
      Although we don't expect to take alignment faults on access to normal
      memory, misbehaving (i.e. buggy) user code can pass MMIO pointers into
      system calls, leading to things like get_user accessing device memory.
      
      Rather than OOPS the kernel, allow any exception fixups to run and
      return something like -EFAULT back to userspace. This makes the
      behaviour more consistent with userspace, even though applications with
      access to device mappings can easily cause other issues if they try
      hard enough.
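
      The mechanism relied upon is the standard uaccess fixup path: if the
      faulting instruction has an exception-table entry, the handler branches
      to its fixup instead of oopsing (sketch of the kernel-mode fault path):

        if (fixup_exception(regs))
                return;         /* get_user()/put_user() fixup runs; the syscall sees -EFAULT */

        /* no fixup registered: this really is a kernel bug, so oops */
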
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Eun Taik Lee <eun.taik.lee@samsung.com>
      [will: dropped __kprobes annotation and rewrote commit message]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      52d7523d
    • arm64: Remove the get_thread_info() function · e950631e
      Authored by Catalin Marinas
      This function was introduced by previous commits implementing UAO.
      However, it can be replaced with task_thread_info() in
      uao_thread_switch() or get_fs() in do_page_fault() (the latter being
      called only on the current context, so there is no need to use the saved
      pt_regs).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e950631e
    • arm64: kernel: Don't toggle PAN on systems with UAO · 70544196
      Authored by James Morse
      If a CPU supports both Privileged Access Never (PAN) and User Access
      Override (UAO), we don't need to disable/re-enable PAN around all
      copy_to_user()-like calls.
      
      UAO alternatives cause these calls to use the 'unprivileged' load/store
      instructions, which are overridden to be the privileged kind when
      fs==KERNEL_DS.
      
      This patch changes the copy_to_user() calls to have their PAN toggling
      depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
      
      If both features are detected, PAN will be enabled, but the copy_to_user()
      alternatives will not be applied. This means PAN will be enabled all the
      time for these functions. If only PAN is detected, the toggling will be
      enabled as normal.
      
      This will save the time taken to disable/re-enable PAN, and allow us to
      catch copy_to_user() accesses that occur with fs==KERNEL_DS.
      
      Futex and swp-emulation code continue to hang their PAN toggling code on
      ARM64_HAS_PAN.
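
      A sketch of how such a composite capability can be expressed in the
      cpufeature table (hedged: the callback signature and .desc string are
      assumptions for illustration):

        static bool cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry)
        {
                /* toggle PAN around uaccess only when UAO cannot do the job */
                return cpus_have_cap(ARM64_HAS_PAN) && !cpus_have_cap(ARM64_HAS_UAO);
        }

        /* entry in the arm64_features[] table */
        {
                .desc = "Privileged Access Never, without User Access Override",
                .capability = ARM64_ALT_PAN_NOT_UAO,
                .matches = cpufeature_pan_not_uao,
        },
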
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      70544196
    • arm64: kernel: Add support for User Access Override · 57f4959b
      Authored by James Morse
      'User Access Override' is a new ARMv8.2 feature which allows the
      unprivileged load and store instructions to be overridden to behave in
      the normal way.
      
      This patch converts {get,put}_user() and friends to use ldtr*/sttr*
      instructions - so that they can only access EL0 memory, then enables
      UAO when fs==KERNEL_DS so that these functions can access kernel memory.
      
      This allows user space's read/write permissions to be checked against the
      page tables, instead of testing addr<USER_DS, then using the kernel's
      read/write permissions.
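
      The core trick, in sketch form (the real macros in asm/uaccess.h are
      more involved, and 'uptr' below is a placeholder user pointer): the
      privileged load is patched to its unprivileged 'ldtr' counterpart when
      the CPU has UAO.

        unsigned long val;

        asm volatile(ALTERNATIVE("ldr  %0, [%1]",       /* pre-UAO: privileged load  */
                                 "ldtr %0, [%1]",       /* UAO: checked as EL0 would */
                                 ARM64_HAS_UAO)
                     : "=r" (val)
                     : "r" (uptr));
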
      Signed-off-by: James Morse <james.morse@arm.com>
      [catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      57f4959b
  24. 25 Nov 2015, 1 commit
  25. 21 Oct 2015, 1 commit
    • arm64: Delay cpu feature capability checks · dbb4e152
      Authored by Suzuki K. Poulose
      At the moment we run through the arm64_features capability list for
      each CPU and set the capability if one of the CPU supports it. This
      could be problematic in a heterogeneous system with differing capabilities.
      Delay the CPU feature checks until all the enabled CPUs are up (i.e.
      smp_cpus_done()), so that we can make better decisions based on the
      overall system capability. Once we decide and advertise the
      capabilities, the alternatives can be applied. From this state, we
      cannot roll back a feature to disabled based on the values from a newly
      hotplugged CPU, due to the runtime patching and other reasons. So, for
      all new CPUs, we need to make sure that they have the established
      system capabilities; failing that, we bring the CPU down and prevent it
      from coming online. Once the capabilities are decided, any new CPU
      booting up goes through verification to ensure that it has all the
      enabled capabilities, and the respective enable() method is invoked on
      that CPU.

      The CPU errata checks are not delayed and are still executed per CPU
      to detect the respective capabilities. If we ever come across a
      non-errata capability that needs to be checked on each CPU, we could
      introduce it via a new capability table (or a flag), which can be
      processed per CPU.
      
      The next patch will make the feature checks use the system wide
      safe value of a feature register.
      
      NOTE: The enable() methods associated with the capability is scheduled
      on all the CPUs (which is the only use case at the moment). If we need
      a different type of 'enable()' which only needs to be run once on any CPU,
      we should be able to handle that when needed.
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Tested-by: Dave Martin <Dave.Martin@arm.com>
      [catalin.marinas@arm.com: static variable and coding style fixes]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      dbb4e152
  26. 05 Oct 2015, 1 commit
    • arm64: readahead: fault retry breaks mmap file read random detection · 569ba74a
      Authored by Mark Salyzyn
      This is the arm64 portion of commit 45cac65b ("readahead: fault
      retry breaks mmap file read random detection"), which was absent from
      the initial port and has since gone unnoticed. The original commit says:
      
      > .fault now can retry.  The retry can break state machine of .fault.  In
      > filemap_fault, if page is miss, ra->mmap_miss is increased.  In the second
      > try, since the page is in page cache now, ra->mmap_miss is decreased.  And
      > these are done in one fault, so we can't detect random mmap file access.
      >
      > Add a new flag to indicate .fault is tried once.  In the second try, skip
      > ra->mmap_miss decreasing.  The filemap_fault state machine is ok with it.
      
      With this change, Mark reports that:
      
      > Random read improves by 250%, sequential read improves by 40%, and
      > random write by 400% to an eMMC device with dm crypto wrapped around it.
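
      The arm64 change mirrors the generic pattern from the quoted commit: on
      the first retry, drop ALLOW_RETRY and record that a retry already
      happened (sketch of the do_page_fault() retry path):

        if (fault & VM_FAULT_RETRY) {
                if (flags & FAULT_FLAG_ALLOW_RETRY) {
                        /* second attempt: tell filemap_fault not to touch mmap_miss again */
                        flags &= ~FAULT_FLAG_ALLOW_RETRY;
                        flags |= FAULT_FLAG_TRIED;
                        goto retry;
                }
        }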
      
      Cc: Shaohua Li <shli@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Riley Andrews <riandrews@android.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      569ba74a
  27. 27 Jul 2015, 2 commits
    • arm64/BUG: Use BRK instruction for generic BUG traps · 9fb7410f
      Authored by Dave P Martin
      Currently, the minimal default BUG() implementation from asm-generic
      is used for arm64.
      
      This patch uses the BRK software breakpoint instruction to generate
      a trap instead, similarly to most other arches, with the generic
      BUG code generating the dmesg boilerplate.
      
      This allows bug metadata to be moved to a separate table and
      reduces the amount of inline code at BUG and WARN sites.  This also
      avoids clobbering any registers before they can be dumped.
      
      To mitigate the size of the bug table further, this patch makes
      use of the existing infrastructure for encoding addresses within
      the bug table as 32-bit offsets instead of absolute pointers.
      (Note that this limits the kernel size to 2GB.)
      
      Traps are registered at arch_initcall time for aarch64, but BUG
      has minimal real dependencies and it is desirable to be able to
      generate bug splats as early as possible.  This patch redirects
      all debug exceptions caused by BRK directly to bug_handler() until
      the full debug exception support has been initialised.
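
      In simplified form (a sketch only; the real macros also encode the
      file, line and flags into the table entry, and BUG_BRK_IMM being 0x800
      is an assumption here), BUG() expands to a BRK with a recognised
      immediate plus a 32-bit offset recorded in the bug table:

        #define SKETCH_BUG()                                                    \
        do {                                                                    \
                asm volatile("1:    brk     %[imm]\n"                           \
                             "      .pushsection __bug_table, \"a\"\n"          \
                             "      .align  2\n"                                \
                             "2:    .word   1b - 2b\n" /* offset back to the BRK */ \
                             "      .popsection\n"                              \
                             : : [imm] "i" (0x800));                            \
                unreachable();                                                  \
        } while (0)
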
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9fb7410f
    • arm64: kernel: Add support for Privileged Access Never · 338d4f49
      Authored by James Morse
      'Privileged Access Never' is a new ARMv8.1 feature which prevents
      privileged code from accessing any virtual address where read or write
      access is also permitted at EL0.
      
      This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
      helpers temporarily to permit access.
      
      This will catch kernel bugs where user memory is accessed directly.
      'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
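
      The uaccess helpers end up bracketed roughly like this (sketch; the
      ALTERNATIVE framework turns the PSTATE write into a NOP on CPUs without
      PAN):

        /* Sketch: drop PAN for the duration of an intentional user access. */
        asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN, CONFIG_ARM64_PAN));

        /* ... the actual ldr/str to the user address goes here ... */

        asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN));
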
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: use ALTERNATIVE in asm and tidy up pan_enable check]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      338d4f49
  28. 04 Jul 2015, 1 commit