1. 07 Feb 2018 (8 commits)
  2. 24 Jan 2018 (3 commits)
  3. 17 Jan 2018 (1 commit)
    • arm64: kpti: Fix the interaction between ASID switching and software PAN · 6b88a32c
      Committed by Catalin Marinas
      With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
      active ASID to decide whether user access was enabled (non-zero ASID)
      when the exception was taken. On return from exception, if user access
      was previously disabled, it re-instates TTBR0_EL1 from the per-thread
      saved value (updated in switch_mm() or efi_set_pgd()).
      
      Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the
      TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7
      ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
      __uaccess_ttbr0_disable() function and asm macro to first write the
      reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
      exception occurs between these two, the exception return code will
      re-instate a valid TTBR0_EL1. A similar scenario can happen in
      cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
      update in cpu_do_switch_mm().
      
      This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
      disables the interrupts around the TTBR0_EL1 and ASID switching code in
      __uaccess_ttbr0_disable(). It also ensures that, when returning from the
      EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
      TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.
      
      The accesses to current_thread_info()->ttbr0 are updated to use
      READ_ONCE/WRITE_ONCE.
      
      As a safety measure, __uaccess_ttbr0_enable() always masks out any
      existing non-zero ASID in TTBR1_EL1 before writing the new ASID.
      
      Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
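      A minimal C sketch of the fixed disable path, assuming the arm64
      kernel helpers write_sysreg()/read_sysreg()/isb() and local_irq_save();
      reserved_ttbr0 and TTBR_ASID_MASK stand in for the patch's actual
      symbols, so treat this as illustrative rather than the exact code:

          static inline void uaccess_ttbr0_disable_sketch(void)
          {
                  unsigned long flags;

                  /* With IRQs masked, no exception can be taken between the
                   * two system-register writes, so the entry code never sees
                   * a half-switched TTBR0_EL1/ASID pair. */
                  local_irq_save(flags);
                  write_sysreg(reserved_ttbr0, ttbr0_el1);   /* zero-page TTBR0 */
                  isb();
                  /* Second half of the switch: clear the ASID in TTBR1_EL1. */
                  write_sysreg(read_sysreg(ttbr1_el1) & ~TTBR_ASID_MASK,
                               ttbr1_el1);
                  isb();
                  local_irq_restore(flags);
          }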
  4. 16 Jan 2018 (9 commits)
    • KVM: arm64: Handle RAS SErrors from EL2 on guest exit · 0067df41
      Committed by James Morse
      We expect to have firmware-first handling of RAS SErrors, with errors
      notified via an APEI method. For systems without firmware-first, add
      some minimal handling to KVM.
      
      There are two ways KVM can take an SError due to a guest, and either may be a
      RAS error: we exit the guest due to an SError routed to EL2 by HCR_EL2.AMO,
      or we take an SError from EL2 when we unmask PSTATE.A from __guest_exit.
      
      The current SError from EL2 code unmasks SError and tries to fence any
      pending SError into a single instruction window. It then leaves SError
      unmasked.
      
      With the v8.2 RAS Extensions we may take an SError for a 'corrected'
      error, but KVM is only able to handle SError from EL2 if they occur
      during this single instruction window...
      
      The RAS Extensions give us a new instruction to synchronise and
      consume SErrors. The RAS Extensions document (ARM DDI0587),
      '2.4.1 ESB and Unrecoverable errors' describes ESB as synchronising
      SError interrupts generated by 'instructions, translation table walks,
      hardware updates to the translation tables, and instruction fetches on
      the same PE'. This makes ESB equivalent to KVM's existing
      'dsb, mrs-daifclr, isb' sequence.
      
      Use the alternatives to synchronise and consume any SError using ESB
      instead of unmasking and taking the SError. Set ARM_EXIT_WITH_SERROR_BIT
      in the exit_code so that we can restart the vcpu if it turns out this
      SError has no impact on the vcpu.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
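      As a sketch of the mechanism, assuming the arm64 alternatives macro
      ALTERNATIVE() and the ARM64_HAS_RAS_EXTN capability added in this
      series ('hint #16' is the ESB encoding; this is not the patch's exact
      assembly):

          /* On RAS-capable CPUs a single ESB synchronises and defers any
           * pending SError into DISR_EL1; otherwise fall back to briefly
           * unmasking PSTATE.A and taking the SError for real. */
          asm volatile(ALTERNATIVE(
                  "dsb    sy\n\t"
                  "msr    daifclr, #4\n\t"    /* unmask SError */
                  "isb",
                  "hint   #16",               /* ESB */
                  ARM64_HAS_RAS_EXTN));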
    • arm64: kernel: Prepare for a DISR user · 68ddbf09
      Committed by James Morse
      KVM would like to consume any pending SError (or RAS error) after guest
      exit. Today it has to unmask SError and use dsb+isb to synchronise the
      CPU. With the RAS extensions we can use ESB to synchronise any pending
      SError.
      
      Add the necessary macros to allow DISR to be read and converted to an
      ESR.
      
      We clear the DISR register when we enable the RAS cpufeature; at that
      point the kernel has not executed any ESB instructions, so any value we
      find in DISR must have belonged to firmware. Executing an ESB
      instruction is the only way to update DISR, so we can expect firmware
      to have handled any deferred SError. By the same logic we clear DISR in
      the idle path.
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
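      A sketch of what the DISR-to-ESR conversion could look like, assuming
      the ESR_ELx_* constants from asm/esr.h; the real helper must also
      honour DISR_EL1.IDS, which flags an implementation-defined syndrome:

          static inline u32 disr_to_esr_sketch(u64 disr)
          {
                  /* Fabricate an ESR with the SError exception class... */
                  u32 esr = ESR_ELx_EC_SERROR << ESR_ELx_EC_SHIFT;

                  /* ...and carry the deferred syndrome (ISS) bits across. */
                  esr |= (u32)(disr & ESR_ELx_ISS_MASK);
                  return esr;
          }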
    • arm64: kernel: Survive corrected RAS errors notified by SError · 6bf0dcfd
      Committed by James Morse
      Prior to v8.2, SError is an uncontainable fatal exception. The v8.2 RAS
      extensions use SError to notify software about RAS errors; these can be
      contained by the Error Synchronization Barrier.
      
      An ACPI system with firmware-first may use SError as its 'SEI'
      notification. Future patches may add code to 'claim' this SError as a
      notification.
      
      Other systems can distinguish these RAS errors from the SError ESR and
      use the AET bits and additional data from RAS-Error registers to handle
      the error. Future patches may add this kernel-first handling.
      
      Without support for either of these we will panic(), even if we received
      a corrected error. Add code to decode the severity of RAS errors. We can
      safely ignore contained errors where the CPU can continue to make
      progress. For all other errors we continue to panic().
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
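      A hedged sketch of the severity decode, using the AET field of the
      SError ESR (constant names follow asm/esr.h; exactly which severities
      are survivable is the patch's policy, summarised here from the text
      above):

          /* Return true if we must panic(); contained errors where the CPU
           * can make progress are safe to ignore. */
          static bool serror_is_fatal_sketch(u32 esr)
          {
                  switch (esr & ESR_ELx_AET) {
                  case ESR_ELx_AET_CE:    /* corrected error */
                  case ESR_ELx_AET_UEO:   /* restartable, not consumed */
                          return false;
                  default:                /* uncontained or unrecoverable */
                          return true;
                  }
          }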
    • arm64: cpufeature: Detect CPU RAS Extentions · 64c02720
      Committed by Xie XiuQi
      ARM's v8.2 Extensions add support for Reliability, Availability and
      Serviceability (RAS). On CPUs with these extensions system software
      can use additional barriers to isolate errors and determine if faults
      are pending. Add cpufeature detection.
      
      Platform level RAS support may require additional firmware support.
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
      [Rebased, added config option, reworded commit message]
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
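      The detection itself is roughly the standard ID-register pattern; a
      sketch of the table entry, assuming the names this series adds
      (ARM64_HAS_RAS_EXTN, ID_AA64PFR0_RAS_SHIFT) alongside the existing
      struct arm64_cpu_capabilities fields:

          {
                  .desc = "RAS Extension Support",
                  .capability = ARM64_HAS_RAS_EXTN,
                  .matches = has_cpuid_feature,
                  .sys_reg = SYS_ID_AA64PFR0_EL1,
                  .sign = FTR_UNSIGNED,
                  .field_pos = ID_AA64PFR0_RAS_SHIFT,
                  .min_field_value = ID_AA64PFR0_RAS_V1,
          },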
    • arm64: sysreg: Move to use definitions for all the SCTLR bits · 7a00d68e
      Committed by James Morse
      __cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks,
      and el2_setup() duplicates some of this when setting the RES1 bits.
      
      Let's make this the same as KVM's hyp_init, which uses named bits.
      
      First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0}
      bits, and those we want to set or clear.
      
      Add build_bug checks to ensure all bits are either set or clear.
      This means we don't need to preserve the endianness configuration
      generated elsewhere.
      
      Finally, move the head.S and proc.S users of these hard-coded masks
      over to the macro versions.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
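      The consistency check can be a simple preprocessor assertion, assuming
      SCTLR_EL1_SET and SCTLR_EL1_CLEAR are the OR of all the named set/clear
      bits (a sketch of the idea, not necessarily the exact macro names):

          /* Every one of the 32 defined bits must be in exactly one mask;
           * a bit in neither (or both) breaks the XOR and fails the build. */
          #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffff
          #error "Inconsistent SCTLR_EL1 set/clear bits"
          #endif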
    • arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early · edf298cf
      Committed by James Morse
      this_cpu_has_cap() tests caps->desc, not caps->matches, so it stops
      walking the list when it finds a 'silent' feature, instead of
      walking to the end of the list.
      
      Prior to v4.6's 644c2ae1 ("arm64: cpufeature: Test 'matches' pointer
      to find the end of the list") we always tested desc to find the end of
      a capability list. This was changed for dubious things like PAN_NOT_UAO.
      v4.7's e3661b12 ("arm64: Allow a capability to be checked on
      single CPU") added this_cpu_has_cap() using the old desc style test.
      
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
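      The fix amounts to using the right end-of-list test when walking the
      table; a sketch, assuming the SCOPE_LOCAL_CPU scope argument used by
      per-CPU matches() callbacks:

          static bool __this_cpu_has_cap_sketch(
                  const struct arm64_cpu_capabilities *caps, unsigned int cap)
          {
                  /* caps->matches, not caps->desc, terminates the list:
                   * 'silent' features have a NULL desc but still count. */
                  for (; caps->matches; caps++)
                          if (caps->capability == cap &&
                              caps->matches(caps, SCOPE_LOCAL_CPU))
                                  return true;
                  return false;
          }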
    • arm64: fpsimd: Fix state leakage when migrating after sigreturn · 0abdeff5
      Committed by Dave Martin
      When refactoring the sigreturn code to handle SVE, I changed the
      sigreturn implementation to store the new FPSIMD state from the
      user sigframe into task_struct before reloading the state into the
      CPU regs.  This makes it easier to convert the data for SVE when
      needed.
      
      However, it turns out that the fpsimd_state structure passed into
      fpsimd_update_current_state is not fully initialised, so assigning
      the structure as a whole corrupts current->thread.fpsimd_state.cpu
      with uninitialised data.
      
      This means that if the garbage data written to .cpu happens to be a
      valid cpu number, and the task is subsequently migrated to the cpu
      identified by that number, and then tries to enter userspace,
      the CPU FPSIMD regs will be assumed to be correct for the task and
      not reloaded as they should be.  This can result in returning to
      userspace with the FPSIMD registers containing data that is stale or
      that belongs to another task or to the kernel.
      
      Knowingly handing around a kernel structure that is incompletely
      initialised with user data is a potential source of mistakes,
      especially across source file boundaries.  To help avoid a repeat
      of this issue, this patch adapts the relevant internal API to hand
      around the user-accessible subset only: struct user_fpsimd_state.
      
      To avoid future surprises, this patch also converts all uses of
      struct fpsimd_state that really only access the user subset, to use
      struct user_fpsimd_state.  A few missing consts are added to
      function prototypes for good measure.
      
      Thanks to Will for spotting the cause of the bug here.
      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
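      A hedged sketch of the layout that makes whole-struct assignment
      dangerous (field layout illustrative, not the kernel's exact
      definitions):

          /* User-visible register file: safe to fill from a sigframe. */
          struct user_fpsimd_state {
                  __uint128_t vregs[32];
                  u32 fpsr;
                  u32 fpcr;
          };

          /* Kernel wrapper: .cpu records which CPU's registers hold this
           * task's state. Assigning the whole wrapper from user-derived
           * data clobbers .cpu with garbage, the bug described above. */
          struct fpsimd_state {
                  struct user_fpsimd_state user_fpsimd;
                  unsigned int cpu;
          };

          /* Fixed API: hand around only the user-accessible subset. */
          void fpsimd_update_current_state(const struct user_fpsimd_state *state);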
    • arm64: Inform user if software PAN is in use · 894cfd14
      Committed by Stephen Boyd
      It isn't entirely obvious whether we're using software PAN, because we
      don't say anything about it in the boot log. But if we're using
      hardware PAN we'll print a nice CPU feature message indicating
      it. Add a print for software PAN too so we know if it's being
      used or not.
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
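      A sketch of the corresponding boot-time notice, assuming the
      CONFIG_ARM64_SW_TTBR0_PAN gate (the exact message text is the patch's
      choice, not quoted here):

          if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN))
                  pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");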
    • arm64: capabilities: Handle duplicate entries for a capability · 67948af4
      Committed by Suzuki K Poulose
      Sometimes a single capability could be listed multiple times with
      differing matches(), e.g., CPU errata for different MIDR versions.
      This breaks verify_local_cpu_feature() and this_cpu_has_cap() as
      we stop checking for a capability on a CPU with the first
      entry in the given table, which is not sufficient. Make sure we
      run the checks for all entries of the same capability. We do
      this by fixing __this_cpu_has_cap() to run through all the
      entries in the given table for a match and reuse it for
      verify_local_cpu_feature().
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
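      For illustration, a table where one capability appears twice with
      different matches() (all names here hypothetical); with the fixed
      __this_cpu_has_cap() walking every entry, both MIDR checks run:

          static const struct arm64_cpu_capabilities errata_sketch[] = {
                  {
                          .desc = "Erratum X (Cortex-A57)",
                          .capability = ARM64_WORKAROUND_X,    /* hypothetical */
                          .matches = is_affected_a57_midr,     /* hypothetical */
                  },
                  {
                          .desc = "Erratum X (Cortex-A72)",
                          .capability = ARM64_WORKAROUND_X,
                          .matches = is_affected_a72_midr,     /* hypothetical */
                  },
                  { /* sentinel: NULL .matches terminates the walk */ }
          };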
  5. 15 Jan 2018 (6 commits)
  6. 13 Jan 2018 (4 commits)
  7. 09 Jan 2018 (7 commits)
  8. 05 Jan 2018 (1 commit)
  9. 03 Jan 2018 (1 commit)