1. 05 December 2019, 1 commit
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 64694b27
      Authored by Will Deacon
      [ Upstream commit 7faa313f05cad184e8b17750f0cbe5216ac6debb ]
      
      Commit 396244692232 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value in favour of inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
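      
      As a rough illustration of the layout being fixed (modelled on arm64's
      struct thread_info; treat the exact field names as illustrative rather
      than a quote of the patch), a 32-bit load from the start of the 64-bit
      field picks up a different half depending on endianness:
      
          union {
                  u64 preempt_count;          /* the fixed assembly loads all 64 bits */
                  struct {
          #ifdef CONFIG_CPU_BIG_ENDIAN
                          u32 need_resched;   /* a 32-bit load at offset 0 sees this... */
                          u32 count;
          #else
                          u32 count;          /* ...instead of the count it sees here */
                          u32 need_resched;
          #endif
                  } preempt;
          };
      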
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      64694b27
  2. 26 July 2019, 1 commit
  3. 14 November 2018, 1 commit
  4. 26 July 2018, 1 commit
  5. 12 July 2018, 7 commits
  6. 11 July 2018, 1 commit
  7. 01 June 2018, 4 commits
  8. 07 February 2018, 5 commits
  9. 06 February 2018, 1 commit
  10. 17 January 2018, 1 commit
    • arm64: kpti: Fix the interaction between ASID switching and software PAN · 6b88a32c
      Authored by Catalin Marinas
      With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
      active ASID to decide whether user access was enabled (non-zero ASID)
      when the exception was taken. On return from exception, if user access
      was previously disabled, it re-instates TTBR0_EL1 from the per-thread
      saved value (updated in switch_mm() or efi_set_pgd()).
      
      Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the
      TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7
      ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
      __uaccess_ttbr0_disable() function and asm macro to first write the
      reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
      exception occurs between these two, the exception return code will
      re-instate a valid TTBR0_EL1. A similar scenario can happen in
      cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
      update in cpu_do_switch_mm().
      
      This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
      disables the interrupts around the TTBR0_EL1 and ASID switching code in
      __uaccess_ttbr0_disable(). It also ensures that, when returning from the
      EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
      TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.
      
      The accesses to current_thread_info()->ttbr0 are updated to use
      READ_ONCE/WRITE_ONCE.
      
      As a safety measure, __uaccess_ttbr0_enable() always masks out any
      existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.
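      
      A simplified sketch, assuming helper names along the lines of the arm64
      uaccess code (the real patch also updates the corresponding assembly
      macro): the key point is that the reserved TTBR0_EL1 write and the ASID
      clear in TTBR1_EL1 now happen with interrupts masked, so exception entry
      can never observe the half-switched state.
      
          static inline void __uaccess_ttbr0_disable(void)
          {
                  unsigned long flags, ttbr;
          
                  local_irq_save(flags);          /* no exception between the two writes */
                  ttbr = read_sysreg(ttbr1_el1);
                  ttbr &= ~TTBR_ASID_MASK;        /* ASID 0 == "user access disabled" */
                  /* the reserved TTBR0 tables sit just below swapper_pg_dir */
                  write_sysreg(ttbr - RESERVED_TTBR0_SIZE, ttbr0_el1);
                  isb();
                  write_sysreg(ttbr, ttbr1_el1);  /* now publish the zero ASID */
                  isb();
                  local_irq_restore(flags);
          }
      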
      
      Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6b88a32c
  11. 15 January 2018, 2 commits
    • arm64: entry: Move the trampoline to be before PAN · 1e1b8c04
      Authored by Steve Capper
      The trampoline page tables are positioned after the early page tables in
      the kernel linker script.
      
      As we are about to change the early page table logic to resolve the
      swapper size at link time as opposed to compile time, the
      SWAPPER_DIR_SIZE variable (currently used to locate the trampoline)
      will be rendered unsuitable for low level assembler.
      
      This patch solves this issue by moving the trampoline before the PAN
      page tables. The offset to the trampoline from ttbr1 can then be
      expressed by: PAGE_SIZE + RESERVED_TTBR0_SIZE, which is available to the
      entry assembler.
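      
      Spelled out as a macro (the name here is hypothetical; the commit just
      uses the expression directly), the offset is:
      
          /* one page of trampoline tables plus the reserved TTBR0 region,
           * counted back from the swapper tables that ttbr1 points at */
          #define TRAMP_OFFSET_BELOW_TTBR1    (PAGE_SIZE + RESERVED_TTBR0_SIZE)
      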
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1e1b8c04
    • arm64: sdei: Add trampoline code for remapping the kernel · 79e9aa59
      Authored by James Morse
      When CONFIG_UNMAP_KERNEL_AT_EL0 is set the SDEI entry point and the rest
      of the kernel may be unmapped when we take an event. If this may be the
      case, use an entry trampoline that can switch to the kernel page tables.
      
      We can't use the provided PSTATE to determine whether to switch page
      tables, as we may have interrupted the kernel's entry trampoline (or a
      normal-priority event that interrupted the kernel's entry trampoline).
      Instead, test for a user ASID in ttbr1_el1.
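      
      Conceptually (the real test is a couple of instructions in entry.S, and
      USER_ASID_BIT is the flag bit the kernel sets in user ASIDs when kpti is
      enabled), the check amounts to:
      
          /* Sketch only: a user ASID left in TTBR1_EL1 means the event arrived
           * while the trampoline/user page tables were live, so the SDEI entry
           * path must map the kernel before calling into C code. */
          static bool interrupted_on_user_tables(void)
          {
                  return read_sysreg(ttbr1_el1) & (1UL << USER_ASID_BIT);
          }
      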
      
      Save a value in regs->addr_limit to indicate whether we need to restore
      the original ASID when returning from this event. This value is only used
      by do_page_fault(), which we don't call with the SDEI regs.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      79e9aa59
  12. 13 January 2018, 1 commit
    • arm64: kernel: Add arch-specific SDEI entry code and CPU masking · f5df2696
      Authored by James Morse
      The Software Delegated Exception Interface (SDEI) is an ARM standard
      for registering callbacks from the platform firmware into the OS.
      This is typically used to implement RAS notifications.
      
      Such notifications enter the kernel at the registered entry-point
      with the register values of the interrupted CPU context. Because this
      is not a CPU exception, it cannot reuse the existing entry code
      (crucially, we don't implicitly know which exception level we interrupted).
      
      Add the entry point to entry.S to set us up for calling into C code. If
      the event interrupted code that had interrupts masked, we always return
      to that location. Otherwise we pretend this was an IRQ, and use SDEI's
      complete_and_resume call to return to vbar_el1 + offset.
      
      This allows the kernel to deliver signals to user space processes. For
      KVM this triggers the world switch, a quick spin round vcpu_run, then
      back into the guest, unless there are pending signals.
      
      Add sdei_mask_local_cpu() calls to the smp_send_stop() code; this covers
      the panic() code-path, which doesn't invoke cpuhotplug notifiers.
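      
      In outline (a sketch of the idea, not the literal diff), each CPU masks
      SDEI on itself as it parks, so firmware cannot inject further events into
      a stopping or panicking CPU:
      
          static void cpu_park_sketch(void)       /* hypothetical name */
          {
                  sdei_mask_local_cpu();          /* refuse further SDEI events here */
                  set_cpu_online(smp_processor_id(), false);
                  local_daif_mask();              /* mask ordinary interrupts too */
                  while (1)
                          cpu_relax();
          }
      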
      
      Because we can interrupt entry-from/exit-to another EL, we can't trust the
      value in sp_el0 or x29, even if we interrupted the kernel. In this case,
      the code in entry.S will save/restore sp_el0 and use the value in
      __entry_task.
      
      When we have VMAP stacks we can interrupt the stack-overflow test, which
      stirs x0 into sp, meaning we have to have our own VMAP stacks. For now
      these are allocated when we probe the interface. Future patches will add
      refcounting hooks to allow the arch code to allocate them lazily.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f5df2696
  13. 09 January 2018, 3 commits
  14. 11 December 2017, 9 commits
  15. 03 November 2017, 2 commits
    • arm64/sve: Detect SVE and activate runtime support · 43994d82
      Authored by Dave Martin
      This patch enables detection of hardware SVE support via the
      cpufeatures framework, and reports its presence to the kernel and
      userspace via the new ARM64_SVE cpucap and HWCAP_SVE hwcap
      respectively.
      
      Userspace can also detect SVE using ID_AA64PFR0_EL1, using the
      cpufeatures MRS emulation.
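      
      For example, a user program can gate its use of SVE on the hwcap (a
      minimal sketch; HWCAP_SVE comes from the uapi hwcap header):
      
          #include <stdio.h>
          #include <sys/auxv.h>
          #include <asm/hwcap.h>          /* HWCAP_SVE */
          
          int main(void)
          {
                  if (getauxval(AT_HWCAP) & HWCAP_SVE)
                          printf("SVE is usable on this system\n");
                  else
                          printf("SVE not reported by the kernel\n");
                  return 0;
          }
      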
      
      When running on hardware that supports SVE, this enables runtime
      kernel support for SVE, and allows user tasks to execute SVE
      instructions and make use of the SVE-specific user/kernel
      interface extensions implemented by this series.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      43994d82
    • arm64/sve: Core task context handling · bc0ee476
      Authored by Dave Martin
      This patch adds the core support for switching and managing the SVE
      architectural state of user tasks.
      
      Calls to the existing FPSIMD low-level save/restore functions are
      factored out as new functions task_fpsimd_{save,load}(), since SVE
      now dynamically may or may not need to be handled at these points
      depending on the kernel configuration, hardware features discovered
      at boot, and the runtime state of the task.  To make these
      decisions as fast as possible, const cpucaps are used where
      feasible, via the system_supports_sve() helper.
      
      The SVE registers are only tracked for threads that have explicitly
      used SVE, indicated by the new thread flag TIF_SVE.  Otherwise, the
      FPSIMD view of the architectural state is stored in
      thread.fpsimd_state as usual.
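      
      In outline, the save side of the context switch only pays the SVE cost
      for tasks that have actually used it (a simplified sketch, not the exact
      patch; save_sve_regs_to() stands in for the real low-level save routine):
      
          static void task_fpsimd_save(void)
          {
                  if (system_supports_sve() && test_thread_flag(TIF_SVE))
                          save_sve_regs_to(current->thread.sve_state);    /* hypothetical helper */
                  else
                          fpsimd_save_state(&current->thread.fpsimd_state);
          }
      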
      
      When in use, the SVE registers are not stored directly in
      thread_struct due to their potentially large and variable size.
      Because the task_struct slab allocator must be configured very
      early during kernel boot, it is also tricky to configure it
      correctly to match the maximum vector length provided by the
      hardware, since this depends on examining secondary CPUs as well as
      the primary.  Instead, a pointer sve_state in thread_struct points
      to a dynamically allocated buffer containing the SVE register data,
      and code is added to allocate and free this buffer at appropriate
      times.
      
      TIF_SVE is set when taking an SVE access trap from userspace, if
      suitable hardware support has been detected.  This enables SVE for
      the thread: a subsequent return to userspace will disable the trap
      accordingly.  If such a trap is taken without sufficient system-
      wide hardware support, SIGILL is sent to the thread instead as if
      an undefined instruction had been executed: this may happen if
      userspace tries to use SVE in a system where not all CPUs support
      it for example.
      
      The kernel will clear TIF_SVE and disable SVE for the thread
      whenever an explicit syscall is made by userspace.  For backwards
      compatibility reasons and conformance with the spirit of the base
      AArch64 procedure call standard, the subset of the SVE register
      state that aliases the FPSIMD registers is still preserved across a
      syscall even if this happens.  The remainder of the SVE register
      state logically becomes zero at syscall entry, though the actual
      zeroing work is currently deferred until the thread next tries to
      use SVE, causing another trap to the kernel.  This implementation
      is suboptimal: in the future, the fastpath case may be optimised
      to zero the registers in-place and leave SVE enabled for the task,
      where beneficial.
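      
      Roughly (a hedged sketch of the behaviour described above, not the exact
      code path), syscall entry drops the task out of the "SVE live" state and
      re-arms the EL0 access trap:
      
          if (system_supports_sve() && test_and_clear_thread_flag(TIF_SVE))
                  sve_user_disable();     /* re-enable the SVE trap via CPACR_EL1 */
      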
      
      TIF_SVE is also cleared in the following slowpath cases, which are
      taken as reasonable hints that the task may no longer use SVE:
       * exec
       * fork and clone
      
      Code is added to sync data between thread.fpsimd_state and
      thread.sve_state whenever enabling/disabling SVE, in a manner
      consistent with the SVE architectural programmer's model.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      [will: added #include to fix allnoconfig build]
      [will: use enable_daif in do_sve_acc]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      bc0ee476