1. 25 Feb 2022, 1 commit
  2. 20 Aug 2021, 1 commit
  3. 18 Aug 2021, 1 commit
  4. 11 Aug 2021, 1 commit
  5. 30 Jul 2021, 1 commit
  6. 11 Jun 2021, 1 commit
  7. 27 May 2021, 1 commit
  8. 09 Apr 2021, 1 commit
  9. 25 Mar 2021, 1 commit
  10. 19 Mar 2021, 1 commit
  11. 09 Feb 2021, 5 commits
  12. 03 Dec 2020, 1 commit
    • arm64: sdei: explicitly simulate PAN/UAO entry · 2376e75c
      Mark Rutland committed
      In preparation for removing addr_limit and set_fs() we must decouple the
      SDEI PAN/UAO manipulation from the uaccess code, and explicitly
      reinitialize these as required.
      
      SDEI enters the kernel with a non-architectural exception, and prior to
      the most recent revision of the specification (ARM DEN 0054B), PSTATE
      bits (e.g. PAN, UAO) are not manipulated in the same way as for
      architectural exceptions. Notably, older versions of the spec can be
      read ambiguously as to whether PSTATE bits are inherited unchanged from
      the interrupted context or whether they are generated from scratch, with
      TF-A doing the latter.
      
      We have three cases to consider:
      
      1) The existing TF-A implementation of SDEI will clear PAN and clear UAO
         (along with other bits in PSTATE) when delivering an SDEI exception.
      
      2) In theory, implementations of SDEI prior to revision B could inherit
         PAN and UAO (along with other bits in PSTATE) unchanged from the
         interrupted context. However, in practice such implementations do not
         exist.
      
      3) Going forward, new implementations of SDEI must clear UAO, and
         depending on SCTLR_ELx.SPAN must either inherit or set PAN.
      
      As we can ignore (2) we can assume that upon SDEI entry, UAO is always
      clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN.
      Therefore, we must explicitly initialize PAN, but do not need to do
      anything for UAO.
      
      Considering what we need to do:
      
      * When set_fs() is removed, force_uaccess_begin() will have no HW
        side-effects. As this only clears UAO, which we can assume has already
        been cleared upon entry, this is not a problem. We do not need to add
        code to manipulate UAO explicitly.
      
      * PAN may be cleared upon entry (in case 1 above), so where a kernel is
        built to use PAN and this is supported by all CPUs, the kernel must
        set PAN upon entry to ensure expected behaviour.
      
      * PAN may be inherited from the interrupted context (in case 3 above),
        and so where a kernel is not built to use PAN or where PAN support is
        not uniform across CPUs, the kernel must clear PAN to ensure expected
        behaviour.
      
      This patch reworks the SDEI code accordingly, explicitly setting PAN to
      the expected state in all cases. To cater for the cases where the kernel
      does not use PAN or this is not uniformly supported by hardware we add a
      new cpu_has_pan() helper which can be used regardless of whether the
      kernel is built to use PAN.
      
      The existing system_uses_ttbr0_pan() is redefined in terms of
      system_uses_hw_pan() both for clarity and as a minor optimization when
      HW PAN is not selected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2376e75c
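      As an illustration of the explicit PAN handling described above, the entry-time
      fixup might look roughly like the sketch below. It assumes the arm64 helpers
      set_pstate_pan(), system_uses_hw_pan() and the new cpu_has_pan() behave as the
      commit message describes; it is a sketch of the idea, not the exact upstream hunk.

      /* Sketch: run early on SDEI entry, before any uaccess routines. */
      static void sdei_pstate_fixup(void)
      {
              /*
               * SDEI is not an architectural exception, so PSTATE.PAN may be
               * clear, set, or inherited from the interrupted context.
               */
              if (system_uses_hw_pan())
                      set_pstate_pan(1);      /* kernel relies on HW PAN: set it */
              else if (cpu_has_pan())
                      set_pstate_pan(0);      /* HW PAN present but unused: clear it */

              /* UAO is guaranteed clear on SDEI entry, so nothing to do for it. */
      }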
  13. 28 Nov 2020, 1 commit
  14. 14 Nov 2020, 2 commits
  15. 10 Nov 2020, 1 commit
  16. 30 Oct 2020, 2 commits
  17. 29 Sep 2020, 3 commits
    • arm64: Get rid of arm64_ssbd_state · 31c84d6c
      Marc Zyngier committed
      Out with the old ghost, in with the new...
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      31c84d6c
    • arm64: Rewrite Spectre-v2 mitigation code · d4647f0a
      Will Deacon committed
      The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
      This is largely due to it being written hastily, without much clue as to
      how things would pan out, and also because it ends up mixing policy and
      state in such a way that it is very difficult to figure out what's going
      on.
      
      Rewrite the Spectre-v2 mitigation so that it clearly separates state from
      policy and follows a more structured approach to handling the mitigation.
      Signed-off-by: Will Deacon <will@kernel.org>
      d4647f0a
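      One way to picture the state/policy split mentioned above is an explicit
      mitigation-state type tracked separately from whatever the command line
      requested. The names below are illustrative of that shape, not a quote of
      the upstream code:

      /* State: what CPU/firmware detection actually gave us. */
      enum mitigation_state {
              SPECTRE_UNAFFECTED,
              SPECTRE_MITIGATED,
              SPECTRE_VULNERABLE,
      };

      /* Policy: what the user asked for, e.g. "nospectre_v2" on the command line. */
      static bool spectre_v2_forced_off;

      /* The reported state is derived from detection plus policy, in one place. */
      static enum mitigation_state spectre_v2_state;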
    • arm64: Remove Spectre-related CONFIG_* options · 6e5f0927
      Will Deacon committed
      The spectre mitigations are too configurable for their own good, leading
      to confusing logic trying to figure out when we should mitigate and when
      we shouldn't. Although the plethora of command-line options need to stick
      around for backwards compatibility, the default-on CONFIG options that
      depend on EXPERT can be dropped, as the mitigations only do anything if
      the system is vulnerable, a mitigation is available and the command-line
      hasn't disabled it.
      
      Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of
      enabling this code unconditionally.
      Signed-off-by: Will Deacon <will@kernel.org>
      6e5f0927
  18. 07 Sep 2020, 1 commit
  19. 04 Sep 2020, 1 commit
  20. 16 Jul 2020, 1 commit
    • arm64: tlb: Use the TLBI RANGE feature in arm64 · d1d3aa98
      Zhenyu Ye committed
      Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
      
      When the CPU supports the TLBI range feature, the minimum range
      granularity is decided by 'scale', so in some cases we cannot flush
      all pages with a single instruction.
      
      For example, when pages = 0xe81a, let's start 'scale' from the
      maximum and find the right 'num' for each 'scale':
      
      1. scale = 3, we can flush no pages because the minimum range is
         2^(5*3 + 1) = 0x10000.
      2. scale = 2, the minimum range is 2^(5*2 + 1) = 0x800, we can
         flush 0xe800 pages this time, the num = 0xe800/0x800 - 1 = 0x1c.
         Remaining pages is 0x1a;
      3. scale = 1, the minimum range is 2^(5*1 + 1) = 0x40, no page
         can be flushed.
      4. scale = 0, we flush the remaining 0x1a pages, the num =
         0x1a/0x2 - 1 = 0xc.
      
      However, in most scenarios pages = 1 when flush_tlb_range() is
      called, so starting from scale = 3 or some other guess (such as
      scale = ilog2(pages)) would incur extra overhead. The 'scale' is
      therefore increased from 0 to the maximum, which is exactly the
      opposite of the order in the example above.
      Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
      Link: https://lore.kernel.org/r/20200715071945.897-4-yezhenyu2@huawei.com
      [catalin.marinas@arm.com: removed unnecessary masks in __TLBI_VADDR_RANGE]
      [catalin.marinas@arm.com: __TLB_RANGE_NUM subtracts 1]
      [catalin.marinas@arm.com: minor adjustments to the comments]
      [catalin.marinas@arm.com: introduce system_supports_tlb_range()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d1d3aa98
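      The scale/num arithmetic in the example above can be checked with a small,
      self-contained program. This is only an illustration of the decomposition,
      not the kernel's __flush_tlb_range(); the helper names merely mirror the
      __TLBI_RANGE_* macros added by the patch.

      #include <stdio.h>

      #define TLBI_RANGE_MASK 0x1fUL

      /* Pages covered by one range operation: (num + 1) * 2^(5*scale + 1). */
      static unsigned long range_pages(unsigned long num, int scale)
      {
              return (num + 1) << (5 * scale + 1);
      }

      /* 'num' field for this scale, or negative if nothing can be flushed at it. */
      static long range_num(unsigned long pages, int scale)
      {
              return (long)((pages >> (5 * scale + 1)) & TLBI_RANGE_MASK) - 1;
      }

      int main(void)
      {
              unsigned long pages = 0xe81a;   /* the example from the commit message */
              int scale;

              /* Walk 'scale' upwards, as the rewritten __flush_tlb_range() does. */
              for (scale = 0; scale <= 3 && pages > 0; scale++) {
                      long num = range_num(pages, scale);

                      if (num < 0)
                              continue;

                      printf("scale=%d num=%#lx -> flush %#lx pages\n",
                             scale, (unsigned long)num, range_pages(num, scale));
                      pages -= range_pages(num, scale);
              }
              printf("pages left: %#lx\n", pages);
              return 0;
      }

      Running it prints the scale = 0 flush of 0x1a pages followed by the
      scale = 2 flush of 0xe800 pages, i.e. the reverse of the order in the
      worked example.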
  21. 02 Jul 2020, 1 commit
  22. 22 Jun 2020, 1 commit
    • KVM: arm64: Annotate hyp NMI-related functions as __always_inline · 7733306b
      Alexandru Elisei committed
      The "inline" keyword is a hint for the compiler to inline a function.  The
      functions system_uses_irq_prio_masking() and gic_write_pmr() are used by
      the code running at EL2 on a non-VHE system, so mark them as
      __always_inline to make sure they'll always be part of the .hyp.text
      section.
      
      This fixes the following splat when trying to run a VM:
      
      [   47.625273] Kernel panic - not syncing: HYP panic:
      [   47.625273] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006
      [   47.625273] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000
      [   47.625273] VCPU:0000000000000000
      [   47.647261] CPU: 1 PID: 217 Comm: kvm-vcpu-0 Not tainted 5.8.0-rc1-ARCH+ #61
      [   47.654508] Hardware name: Globalscale Marvell ESPRESSOBin Board (DT)
      [   47.661139] Call trace:
      [   47.663659]  dump_backtrace+0x0/0x1cc
      [   47.667413]  show_stack+0x18/0x24
      [   47.670822]  dump_stack+0xb8/0x108
      [   47.674312]  panic+0x124/0x2f4
      [   47.677446]  panic+0x0/0x2f4
      [   47.680407] SMP: stopping secondary CPUs
      [   47.684439] Kernel Offset: disabled
      [   47.688018] CPU features: 0x240402,20002008
      [   47.692318] Memory Limit: none
      [   47.695465] ---[ end Kernel panic - not syncing: HYP panic:
      [   47.695465] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006
      [   47.695465] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000
      [   47.695465] VCPU:0000000000000000 ]---
      
      The instruction abort was caused by the code running at EL2 trying to fetch
      an instruction which wasn't mapped in the EL2 translation tables. Using
      objdump showed the two functions as separate symbols in the .text section.
      
      Fixes: 85738e05 ("arm64: kvm: Unmask PMR before entering guest")
      Cc: stable@vger.kernel.org
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: James Morse <james.morse@arm.com>
      Link: https://lore.kernel.org/r/20200618171254.1596055-1-alexandru.elisei@arm.com
      7733306b
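      The fix itself boils down to upgrading the inlining hint on the affected
      helpers so the compiler cannot emit an out-of-line copy in .text, which is
      not mapped at EL2 on non-VHE systems. A simplified before/after sketch
      (the real helper bodies in the cpufeature and GIC headers are more involved):

      /* Before: a plain hint; the compiler may still emit an out-of-line copy in .text. */
      static inline bool system_uses_irq_prio_masking(void)
      {
              return cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
      }

      /* After: force inlining so EL2 callers get the body emitted into their own section. */
      static __always_inline bool system_uses_irq_prio_masking(void)
      {
              return cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
      }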
  23. 20 May 2020, 1 commit
  24. 28 Apr 2020, 1 commit
  25. 18 Mar 2020, 5 commits
  26. 17 Mar 2020, 1 commit
    • arm64: Basic Branch Target Identification support · 8ef8f360
      Dave Martin committed
      This patch adds the bare minimum required to expose the ARMv8.5
      Branch Target Identification feature to userspace.
      
      By itself, this does _not_ automatically enable BTI for any initial
      executable pages mapped by execve().  This will come later, but for
      now it should be possible to enable BTI manually on those pages by
      using mprotect() from within the target process.
      
      Other arches that use the generic mman.h are already using 0x10 for
      arch-specific prot flags, so we use that for PROT_BTI here.
      
      For consistency, signal handler entry points in BTI guarded pages
      are required to be annotated as such, just like any other function.
      This blocks a relatively minor attack vector, but conforming
      userspace will have the annotations anyway, so we may as well
      enforce them.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8ef8f360
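      For example, a process could opt one of its own executable mappings into BTI
      guarding with mprotect(), as the commit message suggests. The snippet below
      assumes the arm64 PROT_BTI value of 0x10 mentioned above when the libc headers
      do not yet provide it:

      #include <sys/mman.h>
      #include <stdio.h>

      #ifndef PROT_BTI
      #define PROT_BTI 0x10   /* arm64 arch-specific prot flag, per the commit above */
      #endif

      /* Mark an already-mapped, BTI-annotated code region as guarded. */
      static int guard_code_region(void *addr, size_t len)
      {
              if (mprotect(addr, len, PROT_READ | PROT_EXEC | PROT_BTI) != 0) {
                      perror("mprotect(PROT_BTI)");
                      return -1;
              }
              return 0;
      }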
  27. 14 Mar 2020, 1 commit
    • arm64: cpufeature: add cpus_have_final_cap() · 1db5cdec
      Mark Rutland committed
      When cpus_have_const_cap() was originally introduced it was intended to
      be safe in hyp context, where it is not safe to access the cpu_hwcaps
      array as cpus_have_cap() did. For more details see commit:
      
        a4023f68 ("arm64: Add hypervisor safe helper for checking constant capabilities")
      
      We then made use of cpus_have_const_cap() throughout the kernel.
      
      Subsequently, we had to defer updating the static_key associated with
      each capability in order to avoid lockdep complaints. To avoid breaking
      kernel-wide usage of cpus_have_const_cap(), this was updated to fall
      back to the cpu_hwcaps array if called before the static_keys were
      updated. As the kvm hyp code was only called later than this, the
      fallback is redundant but not functionally harmful. For more details,
      see commit:
      
        63a1e1c9 ("arm64/cpufeature: don't use mutex in bringup path")
      
      Today we have more users of cpus_have_const_cap() which are only called
      once the relevant static keys are initialized, and it would be
      beneficial to avoid the redundant code.
      
      To that end, this patch adds a new cpus_have_final_cap() helper, which
      is intended to be used in code that only runs once capabilities have
      been finalized, and which never checks the cpu_hwcaps array. This helps
      the compiler to generate better code, as it no longer needs to generate
      code to address and test the cpu_hwcaps array. To help catch misuse,
      cpus_have_final_cap() will BUG() if called before capabilities are
      finalized.
      
      In hyp context, BUG() will result in a hyp panic, but the specific BUG()
      instance will not be identified in the usual way.
      
      Comments are added to the various cpus_have_*_cap() helpers to describe
      the constraints on when they can be used. For clarity cpus_have_cap() is
      moved above the other helpers. Similarly the helpers are updated to use
      system_capabilities_finalized() consistently, and this is made
      __always_inline as required by its new callers.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1db5cdec
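      A usage sketch of the new helper, on the understanding that it may only be
      called once capabilities are finalized; the capability constant here,
      ARM64_HAS_PAN, is just a plausible example rather than a call site from
      this patch:

      static void example_after_caps_are_finalized(void)
      {
              /*
               * Legal only once system_capabilities_finalized() is true,
               * e.g. in code that runs at EL2 or after SMP bring-up.
               * The cpu_hwcaps array is never consulted here, and calling
               * this too early would BUG().
               */
              if (cpus_have_final_cap(ARM64_HAS_PAN)) {
                      /* fast path guarded by the finalized static key */
              }
      }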
  28. 07 Mar 2020, 1 commit