1. 21 Oct 2021, 1 commit
    • arm64: factor out GPR numbering helpers · 8ed1b498
      Authored by Mark Rutland
      In <asm/sysreg.h> we have macros to convert the names of general purpose
      registers (GPRs) into integer constants, which we use to manually build
      the encoding for `MRS` and `MSR` instructions where we can't rely on the
      assembler to do so for us.
      
      In subsequent patches we'll need to map the same GPR names to integer
      constants so that we can use this to build metadata for exception
      fixups.
      
      So that we can use the mappings elsewhere, factor out the
      definitions into a new <asm/gpr-num.h> header, renaming the
      definitions to align with this "GPR num" naming for clarity.
      
      There should be no functional change as a result of this patch.
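      As a rough illustration of what such GPR-number mappings enable, the
      sketch below builds a raw `MRS` instruction word from a system-register
      encoding and a GPR number, following the `sys_reg()` bit layout used by
      `<asm/sysreg.h>`; the `mrs_insn()` helper name is hypothetical, not part
      of the kernel:

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /*
       * sys_reg() packs (op0, op1, CRn, CRm, op2) into the bit positions
       * used by the MRS/MSR instruction encodings, as <asm/sysreg.h> does.
       */
      #define sys_reg(op0, op1, crn, crm, op2) \
              (((op0) << 19) | ((op1) << 16) | ((crn) << 12) | ((crm) << 8) | ((op2) << 5))

      /* Hypothetical helper: encode "MRS Xt, <sysreg>" as a raw instruction. */
      static uint32_t mrs_insn(uint32_t sreg, unsigned int gpr_num)
      {
              /* Rt (the target GPR number) lives in bits [4:0]. */
              return 0xd5200000 | sreg | gpr_num;
      }

      int main(void)
      {
              /* SCTLR_EL1 is sys_reg(3, 0, 1, 0, 0); "MRS X0, SCTLR_EL1". */
              uint32_t insn = mrs_insn(sys_reg(3, 0, 1, 0, 0), 0);

              printf("0x%08x\n", insn);       /* 0xd5381000 */
              assert(insn == 0xd5381000);
              return 0;
      }
      ```

      This is why the GPR-name-to-number mapping is needed at all: the
      assembler cannot fill in the Rt field for a `.inst`-emitted encoding,
      so the macros supply the register number directly.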
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20211019160219.5202-6-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  2. 20 Aug 2021, 2 commits
  3. 18 Aug 2021, 1 commit
  4. 11 Aug 2021, 1 commit
  5. 03 Aug 2021, 2 commits
    • arm64/cpufeature: Optionally disable MTE via command-line · 7a062ce3
      Authored by Yee Lee
      MTE support needs to be optionally disabled at runtime, e.g. to work
      around hardware issues, for firmware development, and for evaluation
      of system resource usage and performance.
      
      This patch makes two changes:
      (1) moves the initialization of the tag-allocation bits (ATA/ATA0)
      into cpu_enable_mte(), as these bits are not cached in the TLB.
      
      (2) allows the shadow value of ID_AA64PFR1_EL1.MTE to be overridden
      by passing "arm64.nomte" on the command line.
      
      When the feature value is off, ATA and TCF are not set, and the
      related functionality is suppressed accordingly.
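      The shadow-value override mechanism can be sketched roughly as below;
      the `apply_override()` helper and its signature are illustrative, not
      the kernel's actual cpufeature API. "arm64.nomte" effectively registers
      an override of 0 for the 4-bit MTE field (bits [11:8] of
      ID_AA64PFR1_EL1), so the sanitised value reports MTE as absent even
      when the hardware implements it:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* ID_AA64PFR1_EL1.MTE lives in bits [11:8]. */
      #define ID_AA64PFR1_MTE_SHIFT   8
      #define ID_AA64PFR1_MTE_MASK    (0xfULL << ID_AA64PFR1_MTE_SHIFT)

      /*
       * Hypothetical sketch of a shadow-value override: bits selected by
       * ovr_mask are replaced with ovr_val before the value is used for
       * feature detection.
       */
      static uint64_t apply_override(uint64_t shadow, uint64_t ovr_val,
                                     uint64_t ovr_mask)
      {
              return (shadow & ~ovr_mask) | (ovr_val & ovr_mask);
      }

      int main(void)
      {
              uint64_t hw = 2ULL << ID_AA64PFR1_MTE_SHIFT; /* MTE2 in HW */
              uint64_t shadow = apply_override(hw, 0, ID_AA64PFR1_MTE_MASK);

              /* With "arm64.nomte", the kernel sees the MTE field as 0. */
              assert((shadow & ID_AA64PFR1_MTE_MASK) == 0);
              return 0;
      }
      ```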
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Suggested-by: Marc Zyngier <maz@kernel.org>
      Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Yee Lee <yee.lee@mediatek.com>
      Link: https://lore.kernel.org/r/20210803070824.7586-2-yee.lee@mediatek.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kasan: mte: use a constant kernel GCR_EL1 value · 82868247
      Authored by Mark Rutland
      When KASAN_HW_TAGS is selected, KASAN is enabled at boot time, and the
      hardware supports MTE, we'll initialize `kernel_gcr_excl` with a value
      dependent on KASAN_TAG_MAX. While the resulting value is a constant
      which depends on KASAN_TAG_MAX, we have to perform some runtime work to
      generate the value, and have to read the value from memory during the
      exception entry path. It would be better if we could generate this as a
      constant at compile-time, and use it as such directly.
      
      Early in boot, within __cpu_setup(), we initialize GCR_EL1 to a safe
      value, and later override this with the value required by KASAN. If
      CONFIG_KASAN_HW_TAGS is not selected, or if KASAN is disabled at boot
      time, the kernel will not use IRG instructions, and so the initial
      value of GCR_EL1 does not matter to the kernel. Thus, we can instead
      have __cpu_setup() initialize GCR_EL1 to a value consistent with
      KASAN_TAG_MAX, and avoid the need to re-initialize it during hotplug
      and resume from suspend.
      
      This patch makes arm64 use a compile-time constant KERNEL_GCR_EL1
      value, which is compatible with KASAN_HW_TAGS when this is selected.
      This removes the need to re-initialize GCR_EL1 dynamically, and acts
      as an optimization for the entry assembly, which no longer needs to
      load this value from memory. The redundant initialization hooks are
      removed.
      
      In order to do this, KASAN_TAG_MAX needs to be visible outside of the
      core KASAN code. To do this, I've moved the KASAN_TAG_* values into
      <linux/kasan-tags.h>.
      
      There should be no functional change as a result of this patch.
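      The compile-time derivation can be sketched as follows. The tag values
      match <linux/kasan-tags.h>; the `KERNEL_GCR_EXCL` macro here is a
      simplified stand-in for the kernel's actual KERNEL_GCR_EL1 definition.
      GCR_EL1.Exclude is a 16-bit mask with one bit per 4-bit tag; a set bit
      excludes that tag from IRG generation, so allowing tags
      0..KASAN_TAG_MAX leaves only the invalid (0xE) and kernel (0xF) tags
      excluded:

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Tag values as in <linux/kasan-tags.h>. */
      #define KASAN_TAG_KERNEL        0xFF    /* native kernel pointers */
      #define KASAN_TAG_INVALID       0xFE    /* redzone / freed memory */
      #define KASAN_TAG_MAX           0xFD    /* largest tag IRG may generate */

      #define SYS_GCR_EL1_EXCL_MASK   0xffffULL

      /*
       * Sketch: exclude every tag above KASAN_TAG_MAX. Because this folds
       * to a constant, it can serve as a compile-time KERNEL_GCR_EL1 value
       * instead of being computed and stored at runtime.
       */
      #define KERNEL_GCR_EXCL \
              (SYS_GCR_EL1_EXCL_MASK & ~((1ULL << ((KASAN_TAG_MAX & 0xf) + 1)) - 1))

      int main(void)
      {
              /* Only tag bits 14 (0xE) and 15 (0xF) remain excluded. */
              assert(KERNEL_GCR_EXCL == 0xc000);
              return 0;
      }
      ```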
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Peter Collingbourne <pcc@google.com>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
      Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
      Link: https://lore.kernel.org/r/20210714143843.56537-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 30 Jul 2021, 1 commit
  7. 22 Jun 2021, 1 commit
  8. 02 Jun 2021, 2 commits
  9. 14 Apr 2021, 1 commit
    • arm64: Introduce prctl(PR_PAC_{SET,GET}_ENABLED_KEYS) · 20169862
      Authored by Peter Collingbourne
      This change introduces a prctl that allows the user program to control
      which PAC keys are enabled in a particular task. The main reason
      why this is useful is to enable a userspace ABI that uses PAC to
      sign and authenticate function pointers and other pointers exposed
      outside of the function, while still allowing binaries conforming
      to the ABI to interoperate with legacy binaries that do not sign or
      authenticate pointers.
      
      The idea is that a dynamic loader or early startup code would issue
      this prctl very early after establishing that a process may load legacy
      binaries, but before executing any PAC instructions.
      
      This change adds a small amount of overhead to kernel entry and exit
      due to additional required instruction sequences.
      
      On a DragonBoard 845c (Cortex-A75) with the powersave governor, the
      overhead of similar instruction sequences was measured as 4.9ns when
      simulating the common case where IA is left enabled, or 43.7ns when
      simulating the uncommon case where IA is disabled. These numbers can
      be seen as a worst case, since a more realistic scenario would use a
      better-performing governor and a newer chip that, unlike Cortex-A75,
      actually supports PAC and would be expected to be faster.
      
      On an Apple M1 under a hypervisor, the overhead of the entry/exit
      instruction sequences introduced by this patch was measured as 0.3ns
      in the case where IA is left enabled, and 33.0ns in the case where
      IA is disabled.
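      A minimal usage sketch of the new prctl follows. The constant values
      are those in the uapi <linux/prctl.h> as I understand them; treat the
      scenario (a loader disabling all keys except IA before running legacy
      binaries) as illustrative rather than prescriptive, and note that the
      actual call only succeeds on arm64 hardware with PAC support:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* uapi constants from <linux/prctl.h>, repeated here for illustration. */
      #define PR_PAC_SET_ENABLED_KEYS 60
      #define PR_PAC_GET_ENABLED_KEYS 61
      #define PR_PAC_APIAKEY  (1UL << 0)
      #define PR_PAC_APIBKEY  (1UL << 1)
      #define PR_PAC_APDAKEY  (1UL << 2)
      #define PR_PAC_APDBKEY  (1UL << 3)

      int main(void)
      {
              /*
               * A loader that may run legacy binaries would issue, before
               * executing any PAC instructions:
               *
               *   prctl(PR_PAC_SET_ENABLED_KEYS,
               *         PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
               *         0, 0, 0);
               *
               * arg2 selects which keys the call affects; arg3 gives their
               * new enabled state (here: all affected keys disabled).
               */
              unsigned long affected = PR_PAC_APIBKEY | PR_PAC_APDAKEY |
                                       PR_PAC_APDBKEY;

              /* IA (bit 0) is deliberately left out of the affected mask. */
              assert((affected & PR_PAC_APIAKEY) == 0);
              printf("affected key mask: 0x%lx\n", affected);
              return 0;
      }
      ```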
      Signed-off-by: Peter Collingbourne <pcc@google.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
      Link: https://lore.kernel.org/r/d6609065f8f40397a4124654eb68c9f490b4d477.1616123271.git.pcc@google.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 11 Apr 2021, 1 commit
  11. 09 Apr 2021, 1 commit
  12. 08 Apr 2021, 1 commit
  13. 06 Apr 2021, 1 commit
  14. 26 Mar 2021, 1 commit
  15. 18 Mar 2021, 2 commits
  16. 10 Mar 2021, 1 commit
  17. 05 Feb 2021, 1 commit
  18. 03 Feb 2021, 1 commit
    • KVM: arm64: Upgrade PMU support to ARMv8.4 · 46081078
      Authored by Marc Zyngier
      Upgrading the PMU code from ARMv8.1 to ARMv8.4 turns out to be
      pretty easy. All that is required is support for PMMIR_EL1, which
      is read-only, and for which returning 0 is a valid option as long
      as we don't advertise STALL_SLOT as an implemented event.
      
      Let's just do that and adjust what we return to the guest.
      Signed-off-by: Marc Zyngier <maz@kernel.org>
  19. 21 Jan 2021, 1 commit
  20. 03 Dec 2020, 2 commits
  21. 01 Dec 2020, 1 commit
  22. 27 Nov 2020, 1 commit
  23. 13 Nov 2020, 1 commit
  24. 29 Oct 2020, 1 commit
  25. 28 Sep 2020, 1 commit
  26. 14 Sep 2020, 1 commit
    • arm64: ptrauth: Introduce Armv8.3 pointer authentication enhancements · e16aeb07
      Authored by Amit Daniel Kachhap
      Some Armv8.3 Pointer Authentication enhancements have been introduced
      which are mandatory for Armv8.6 and optional for Armv8.3. These
      features are:
      
      * ARMv8.3-PAuth2 - Enhanced PAC generation logic which makes it
        harder to recover the correct PAC value for an authenticated
        pointer.
      
      * ARMv8.3-FPAC - A fault is now generated when a ptrauth
        authentication instruction fails to authenticate the PAC embedded
        in the address. Previously such failures merely set an error code
        in the pointer's top byte and left the abort to a subsequent
        load/store. The ptrauth instructions which may cause this fault
        include autiasp, retaa, etc.
      
      The above features are now represented by additional configurations
      for the Address Authentication cpufeature and a new ESR exception class.
      
      The userspace fault received in the kernel due to ARMv8.3-FPAC is treated
      as Illegal instruction and hence signal SIGILL is injected with ILL_ILLOPN
      as the signal code. Note that this is different from earlier ARMv8.3
      ptrauth where signal SIGSEGV is issued due to Pointer authentication
      failures. The in-kernel PAC fault causes kernel to crash.
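      The dispatch described above can be sketched as below. The exception
      class value and shift follow <asm/esr.h> as I understand it, but the
      `classify_el0_exception()` helper and its return type are hypothetical
      simplifications of the kernel's actual entry-path handling:

      ```c
      #define _GNU_SOURCE
      #include <assert.h>
      #include <signal.h>
      #include <stdint.h>

      /* Exception class for a ptrauth authentication failure (FPAC). */
      #define ESR_ELx_EC_FPAC         0x1CULL
      #define ESR_ELx_EC_SHIFT        26

      struct fault_signal {
              int signo;
              int code;
      };

      /*
       * Hypothetical sketch: an FPAC fault taken from EL0 is reported to
       * userspace as an illegal instruction (SIGILL/ILL_ILLOPN).
       */
      static struct fault_signal classify_el0_exception(uint64_t esr)
      {
              struct fault_signal sig = { 0, 0 };

              if ((esr >> ESR_ELx_EC_SHIFT) == ESR_ELx_EC_FPAC) {
                      sig.signo = SIGILL;
                      sig.code = ILL_ILLOPN;
              }
              return sig;
      }

      int main(void)
      {
              uint64_t esr = ESR_ELx_EC_FPAC << ESR_ELx_EC_SHIFT;
              struct fault_signal sig = classify_el0_exception(esr);

              assert(sig.signo == SIGILL && sig.code == ILL_ILLOPN);
              return 0;
      }
      ```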
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Link: https://lore.kernel.org/r/20200914083656.21428-4-amit.kachhap@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  27. 04 Sep 2020, 3 commits
  28. 22 Jul 2020, 1 commit
  29. 15 Jul 2020, 1 commit
  30. 07 Jul 2020, 1 commit
  31. 03 Jul 2020, 3 commits