1. 30 Oct 2020, 1 commit
  2. 29 Sep 2020, 3 commits
    • arm64: Get rid of arm64_ssbd_state · 31c84d6c
      Marc Zyngier committed
      Out with the old ghost, in with the new...
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Rewrite Spectre-v2 mitigation code · d4647f0a
      Will Deacon committed
      The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
      This is largely due to it being written hastily, without much clue as to
      how things would pan out, and also because it ends up mixing policy and
      state in such a way that it is very difficult to figure out what's going
      on.
      
      Rewrite the Spectre-v2 mitigation so that it clearly separates state from
      policy and follows a more structured approach to handling the mitigation.
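      
      The state/policy split can be illustrated with a small user-space
      sketch. The enum mirrors the shape of the patch, but the helper
      names and the logic below are simplified illustrations, not the
      kernel source:
      
        #include <stdbool.h>
        #include <stdio.h>
        
        /* State: what this system is, and what we did about it. */
        enum mitigation_state {
                SPECTRE_UNAFFECTED,
                SPECTRE_MITIGATED,
                SPECTRE_VULNERABLE,
        };
        
        static enum mitigation_state spectre_v2_state;
        
        /* Policy: a separate decision input (e.g. the command line). */
        static bool nospectre_v2;
        
        static void spectre_v2_enable_mitigation(bool affected, bool have_mitigation)
        {
                if (!affected) {
                        spectre_v2_state = SPECTRE_UNAFFECTED;
                } else if (nospectre_v2 || !have_mitigation) {
                        spectre_v2_state = SPECTRE_VULNERABLE;
                } else {
                        /* ...install the workaround here... */
                        spectre_v2_state = SPECTRE_MITIGATED;
                }
        }
        
        int main(void)
        {
                spectre_v2_enable_mitigation(true, true);
                printf("spectre_v2_state = %d\n", spectre_v2_state);
                return 0;
        }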
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Remove Spectre-related CONFIG_* options · 6e5f0927
      Will Deacon committed
      The Spectre mitigations are too configurable for their own good, leading
      to confusing logic trying to figure out when we should mitigate and when
      we shouldn't. Although the plethora of command-line options needs to stick
      around for backwards compatibility, the default-on CONFIG options that
      depend on EXPERT can be dropped, as the mitigations only do anything if
      the system is vulnerable, a mitigation is available and the command-line
      hasn't disabled it.
      
      Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of
      enabling this code unconditionally.
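      
      The resulting structure amounts to a single gate, sketched below with
      hypothetical helper names (the real detection code is spread across
      the mitigation callbacks):
      
        #include <stdbool.h>
        
        /* Hypothetical stand-ins for the real detection and policy code. */
        static bool cpu_is_vulnerable(void)    { return true; }
        static bool mitigation_available(void) { return true; }
        static bool disabled_on_cmdline(void)  { return false; }
        
        /* Built unconditionally: a no-op unless all three conditions
         * hold, so no CONFIG_* gate is needed. */
        static bool should_mitigate(void)
        {
                return cpu_is_vulnerable() &&
                       mitigation_available() &&
                       !disabled_on_cmdline();
        }
        
        int main(void) { return should_mitigate() ? 0 : 1; }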
      Signed-off-by: Will Deacon <will@kernel.org>
  3. 16 Jul 2020, 1 commit
    • arm64: tlb: Use the TLBI RANGE feature in arm64 · d1d3aa98
      Zhenyu Ye committed
      Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
      
      When the CPU supports the TLBI range feature, the minimum range
      granularity is decided by 'scale', so we cannot always flush all
      pages with one instruction.
      
      For example, when pages = 0xe81a, let's start 'scale' from the
      maximum and find the right 'num' for each 'scale':
      
      1. scale = 3: we can flush no pages, because the minimum range is
         2^(5*3 + 1) = 0x10000.
      2. scale = 2: the minimum range is 2^(5*2 + 1) = 0x800, so we can
         flush 0xe800 pages this time with num = 0xe800/0x800 - 1 = 0x1c.
         The remaining pages are 0x1a.
      3. scale = 1: the minimum range is 2^(5*1 + 1) = 0x40; no page
         can be flushed.
      4. scale = 0: we flush the remaining 0x1a pages with num =
         0x1a/0x2 - 1 = 0xc.
      
      However, in most scenarios pages = 1 when flush_tlb_range() is
      called, so starting from scale = 3, or from some other value such
      as scale = ilog2(pages), would incur extra overhead. We therefore
      increase 'scale' from 0 to the maximum instead; the flush order is
      exactly opposite to the example above.
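      
      A user-space sketch of that decomposition follows. The macro shapes
      follow __TLBI_RANGE_NUM/__TLBI_RANGE_PAGES from the patch, but this
      is an illustration rather than the kernel code:
      
        #include <stdio.h>
        
        #define RANGE_MASK              0x1fUL
        #define RANGE_NUM(pages, scale) \
                ((long)(((pages) >> (5 * (scale) + 1)) & RANGE_MASK) - 1)
        #define RANGE_PAGES(num, scale) \
                (((unsigned long)(num) + 1) << (5 * (scale) + 1))
        
        int main(void)
        {
                unsigned long pages = 0xe81a;
                int scale = 0;
        
                while (pages > 0) {
                        if (pages % 2 == 1) {
                                /* Odd leftover: flush a single page. */
                                printf("flush 1 page (plain TLBI)\n");
                                pages -= 1;
                                continue;
                        }
                        long num = RANGE_NUM(pages, scale);
                        if (num >= 0) {
                                /* One range op covers (num + 1) << (5*scale + 1)
                                 * pages; for 0xe81a this prints scale=0 num=0xc
                                 * (0x1a pages), then scale=2 num=0x1c (0xe800). */
                                printf("TLBI RANGE: scale=%d num=%#lx covers %#lx pages\n",
                                       scale, (unsigned long)num,
                                       RANGE_PAGES(num, scale));
                                pages -= RANGE_PAGES(num, scale);
                        }
                        scale++;
                }
                return 0;
        }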
      Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
      Link: https://lore.kernel.org/r/20200715071945.897-4-yezhenyu2@huawei.com
      [catalin.marinas@arm.com: removed unnecessary masks in __TLBI_VADDR_RANGE]
      [catalin.marinas@arm.com: __TLB_RANGE_NUM subtracts 1]
      [catalin.marinas@arm.com: minor adjustments to the comments]
      [catalin.marinas@arm.com: introduce system_supports_tlb_range()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 02 Jul 2020, 1 commit
  5. 22 Jun 2020, 1 commit
    • KVM: arm64: Annotate hyp NMI-related functions as __always_inline · 7733306b
      Alexandru Elisei committed
      The "inline" keyword is a hint for the compiler to inline a function.  The
      functions system_uses_irq_prio_masking() and gic_write_pmr() are used by
      the code running at EL2 on a non-VHE system, so mark them as
      __always_inline to make sure they'll always be part of the .hyp.text
      section.
      
      This fixes the following splat when trying to run a VM:
      
      [   47.625273] Kernel panic - not syncing: HYP panic:
      [   47.625273] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006
      [   47.625273] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000
      [   47.625273] VCPU:0000000000000000
      [   47.647261] CPU: 1 PID: 217 Comm: kvm-vcpu-0 Not tainted 5.8.0-rc1-ARCH+ #61
      [   47.654508] Hardware name: Globalscale Marvell ESPRESSOBin Board (DT)
      [   47.661139] Call trace:
      [   47.663659]  dump_backtrace+0x0/0x1cc
      [   47.667413]  show_stack+0x18/0x24
      [   47.670822]  dump_stack+0xb8/0x108
      [   47.674312]  panic+0x124/0x2f4
      [   47.677446]  panic+0x0/0x2f4
      [   47.680407] SMP: stopping secondary CPUs
      [   47.684439] Kernel Offset: disabled
      [   47.688018] CPU features: 0x240402,20002008
      [   47.692318] Memory Limit: none
      [   47.695465] ---[ end Kernel panic - not syncing: HYP panic:
      [   47.695465] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006
      [   47.695465] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000
      [   47.695465] VCPU:0000000000000000 ]---
      
      The instruction abort was caused by the code running at EL2 trying to fetch
      an instruction which wasn't mapped in the EL2 translation tables. Using
      objdump showed the two functions as separate symbols in the .text section.
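      
      The fix is mechanical: force inlining so the helpers are emitted
      into their hyp callers rather than as out-of-line .text symbols. A
      simplified user-space illustration of the annotation (the function
      bodies here are stand-ins, not the real helpers):
      
        #include <stdio.h>
        
        #ifndef __always_inline
        #define __always_inline inline __attribute__((__always_inline__))
        #endif
        
        /* Stand-in for the real capability check. */
        static __always_inline int system_uses_irq_prio_masking(void)
        {
                return 1;
        }
        
        /* Stand-in for the real ICC_PMR_EL1 write. */
        static __always_inline void gic_write_pmr(unsigned long val)
        {
                (void)val;
        }
        
        int main(void)
        {
                /* With __always_inline both helpers are emitted inline
                 * here; no separate .text symbols remain for EL2 to
                 * fault on. */
                if (system_uses_irq_prio_masking())
                        gic_write_pmr(0xf0);
                printf("ok\n");
                return 0;
        }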
      
      Fixes: 85738e05 ("arm64: kvm: Unmask PMR before entering guest")
      Cc: stable@vger.kernel.org
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Acked-by: James Morse <james.morse@arm.com>
      Link: https://lore.kernel.org/r/20200618171254.1596055-1-alexandru.elisei@arm.com
  6. 20 May 2020, 1 commit
  7. 28 Apr 2020, 1 commit
  8. 18 Mar 2020, 5 commits
  9. 17 Mar 2020, 1 commit
    • arm64: Basic Branch Target Identification support · 8ef8f360
      Dave Martin committed
      This patch adds the bare minimum required to expose the ARMv8.5
      Branch Target Identification feature to userspace.
      
      By itself, this does _not_ automatically enable BTI for any initial
      executable pages mapped by execve().  This will come later, but for
      now it should be possible to enable BTI manually on those pages by
      using mprotect() from within the target process.
      
      Other arches using the generic mman.h are already using 0x10 for
      arch-specific prot flags, so we use that for PROT_BTI here.
      
      For consistency, signal handler entry points in BTI guarded pages
      are required to be annotated as such, just like any other function.
      This blocks a relatively minor attack vector, but conforming
      userspace will have the annotations anyway, so we may as well
      enforce them.
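      
      Enabling BTI manually from userspace, as described above, would
      look something like this sketch (PROT_BTI is defined locally for
      older headers, using the 0x10 value noted in the message):
      
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>
        
        #ifndef PROT_BTI
        #define PROT_BTI 0x10   /* arch-specific prot flag, per the message */
        #endif
        
        int main(void)
        {
                long pagesz = sysconf(_SC_PAGESIZE);
                /* An executable page that execve() did not mark as guarded. */
                void *p = mmap(NULL, pagesz, PROT_READ | PROT_EXEC,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                        return 1;
        
                /* Opt the page in to branch target enforcement. */
                if (mprotect(p, pagesz, PROT_READ | PROT_EXEC | PROT_BTI))
                        perror("mprotect(PROT_BTI)");
        
                munmap(p, pagesz);
                return 0;
        }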
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 14 Mar 2020, 1 commit
    • arm64: cpufeature: add cpus_have_final_cap() · 1db5cdec
      Mark Rutland committed
      When cpus_have_const_cap() was originally introduced it was intended to
      be safe in hyp context, where it is not safe to access the cpu_hwcaps
      array as cpus_have_cap() did. For more details see commit:
      
        a4023f68 ("arm64: Add hypervisor safe helper for checking constant capabilities")
      
      We then made use of cpus_have_const_cap() throughout the kernel.
      
      Subsequently, we had to defer updating the static_key associated with
      each capability in order to avoid lockdep complaints. To avoid breaking
      kernel-wide usage of cpus_have_const_cap(), this was updated to fall
      back to the cpu_hwcaps array if called before the static_keys were
      updated. As the kvm hyp code was only called later than this, the
      fallback is redundant but not functionally harmful. For more details,
      see commit:
      
        63a1e1c9 ("arm64/cpufeature: don't use mutex in bringup path")
      
      Today we have more users of cpus_have_const_cap() which are only called
      once the relevant static keys are initialized, and it would be
      beneficial to avoid the redundant code.
      
      To that end, this patch adds a new cpus_have_final_cap() helper,
      which is intended to be used in code that only runs once
      capabilities have been finalized, and which never checks the
      cpu_hwcaps array. This helps the compiler to generate better code,
      as it no longer needs to address and test the cpu_hwcaps array. To
      help catch misuse,
      cpus_have_final_cap() will BUG() if called before capabilities are
      finalized.
      
      In hyp context, BUG() will result in a hyp panic, but the specific BUG()
      instance will not be identified in the usual way.
      
      Comments are added to the various cpus_have_*_cap() helpers to describe
      the constraints on when they can be used. For clarity cpus_have_cap() is
      moved above the other helpers. Similarly the helpers are updated to use
      system_capabilities_finalized() consistently, and this is made
      __always_inline as required by its new callers.
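      
      A simplified user-space model of the three helpers' contracts (the
      real versions operate on static keys and the cpu_hwcaps bitmap;
      abort() stands in for BUG()):
      
        #include <stdbool.h>
        #include <stdlib.h>
        
        static bool caps_finalized;
        static bool static_key[64];     /* models the per-cap static keys */
        static bool cpu_hwcaps[64];     /* models the cpu_hwcaps bitmap */
        
        /* Safe at any time: always reads the bitmap. */
        static bool cpus_have_cap(int cap)
        {
                return cpu_hwcaps[cap];
        }
        
        /* Safe at any time: falls back to the bitmap before finalization. */
        static bool cpus_have_const_cap(int cap)
        {
                return caps_finalized ? static_key[cap] : cpu_hwcaps[cap];
        }
        
        /* Only valid after finalization: no fallback path to compile in. */
        static bool cpus_have_final_cap(int cap)
        {
                if (!caps_finalized)
                        abort();        /* models BUG() */
                return static_key[cap];
        }
        
        int main(void)
        {
                cpu_hwcaps[3] = true;
                caps_finalized = true;
                static_key[3] = true;
                return (cpus_have_cap(3) && cpus_have_const_cap(3) &&
                        cpus_have_final_cap(3)) ? 0 : 1;
        }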
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 07 Mar 2020, 1 commit
  12. 22 Feb 2020, 2 commits
  13. 15 Jan 2020, 1 commit
    • arm64: Introduce system_capabilities_finalized() marker · b51c6ac2
      Suzuki K Poulose committed
      We finalize the system-wide capabilities after the SMP CPUs are
      booted by the kernel. This is used as a marker for deciding various
      checks in the kernel, e.g., sanity-checking hotplugged CPUs for
      missing mandatory features.
      
      However, there is no explicit helper available for this in the
      kernel. There is sys_caps_initialised, which is not exposed. The
      closest thing we have is the jump_label arm64_const_caps_ready,
      which denotes that the capabilities are set and the capability
      checks can use the individual jump_labels for the fast path. This
      is set before the ELF hwcaps are established, and those must be
      checked against the new CPUs. We also perform some of the other
      initialization at this point, e.g., SVE setup, which is important
      for the use of FP/SIMD where SVE is supported. Normally userspace
      doesn't get to run before we finish this, but in-kernel users may
      potentially start using NEON mode earlier, so we need to reject
      kernel-mode NEON until the setup is complete. Instead of defining a
      new marker for the completion of SVE setup, we can simply reuse
      arm64_const_caps_ready and enable it once we have finished all the
      setup. We also expose this to the various users as
      system_capabilities_finalized(), which is more meaningful than
      "const_caps_ready".
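      
      The resulting helper is a thin, readable wrapper over the existing
      jump label, sketched here with a plain bool (the kernel tests the
      static key instead):
      
        #include <stdbool.h>
        
        /* Set once SMP boot, ELF hwcaps and SVE setup have all completed. */
        static bool arm64_const_caps_ready;
        
        static inline bool system_capabilities_finalized(void)
        {
                return arm64_const_caps_ready;
        }
        
        /* Example user: kernel-mode NEON is refused until setup completes. */
        static bool may_use_simd(void)
        {
                return system_capabilities_finalized();
        }
        
        int main(void)
        {
                /* Nothing has set arm64_const_caps_ready yet, so NEON is refused. */
                return may_use_simd() ? 1 : 0;
        }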
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  14. 18 Oct 2019, 1 commit
  15. 15 Aug 2019, 1 commit
  16. 05 Aug 2019, 1 commit
  17. 01 Aug 2019, 1 commit
  18. 05 Jul 2019, 1 commit
  19. 21 Jun 2019, 1 commit
  20. 19 Jun 2019, 1 commit
  21. 15 May 2019, 1 commit
    • arm64: mark (__)cpus_have_const_cap as __always_inline · 02166b88
      Masahiro Yamada committed
      This prepares to move CONFIG_OPTIMIZE_INLINING from x86 to a common
      place.  We need to eliminate potential issues beforehand.
      
      If it is enabled for arm64, the following errors are reported:
      
        In file included from include/linux/compiler_types.h:68,
                         from <command-line>:
        arch/arm64/include/asm/jump_label.h: In function 'cpus_have_const_cap':
        include/linux/compiler-gcc.h:120:38: warning: asm operand 0 probably doesn't match constraints
         #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
                                              ^~~
        arch/arm64/include/asm/jump_label.h:32:2: note: in expansion of macro 'asm_volatile_goto'
          asm_volatile_goto(
          ^~~~~~~~~~~~~~~~~
        include/linux/compiler-gcc.h:120:38: error: impossible constraint in 'asm'
         #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
                                              ^~~
        arch/arm64/include/asm/jump_label.h:32:2: note: in expansion of macro 'asm_volatile_goto'
          asm_volatile_goto(
          ^~~~~~~~~~~~~~~~~
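      
      The failure class can be reproduced in miniature (a hypothetical
      example, not the kernel's jump_label code): an "i" constraint needs
      a compile-time constant, which only survives if the function is
      really inlined into a caller passing a literal. Build with
      optimization, e.g. gcc -O2:
      
        #ifndef __always_inline
        #define __always_inline inline __attribute__((__always_inline__))
        #endif
        
        /* With plain "inline", the compiler may emit this out of line,
         * at which point "cap" is no longer a constant and the "i"
         * constraint becomes impossible, as in the error above.
         * __always_inline keeps the constant visible. */
        static __always_inline void check_cap(int cap)
        {
                asm volatile("" : : "i"(cap));
        }
        
        int main(void)
        {
                check_cap(7);   /* literal, constant after inlining at -O2 */
                return 0;
        }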
      
      Link: http://lkml.kernel.org/r/20190423034959.13525-3-yamada.masahiro@socionext.com
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boris Brezillon <bbrezillon@kernel.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Brian Norris <computersforpeace@gmail.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Marek Vasut <marek.vasut@gmail.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Stefan Agner <stefan@agner.ch>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 26 Apr 2019, 1 commit
  23. 16 Apr 2019, 2 commits
    • arm64: HWCAP: encapsulate elf_hwcap · aec0bff7
      Andrew Murray committed
      The introduction of AT_HWCAP2 introduced accessors which ensure that
      hwcap features are set and tested appropriately.
      
      Let's now mandate access to elf_hwcap via these accessors by making
      elf_hwcap static within cpufeature.c.
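      
      The encapsulation pattern, in miniature (simplified: the real
      accessors are the cpu_set_feature()/cpu_have_feature() helpers
      operating on KERNEL_HWCAP_* ordinals):
      
        #include <stdbool.h>
        
        /* static: nothing outside this file touches elf_hwcap directly. */
        static unsigned long elf_hwcap;
        
        void cpu_set_feature(unsigned int num)
        {
                elf_hwcap |= 1UL << num;
        }
        
        bool cpu_have_feature(unsigned int num)
        {
                return elf_hwcap & (1UL << num);
        }
        
        int main(void)
        {
                cpu_set_feature(5);
                return cpu_have_feature(5) ? 0 : 1;
        }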
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: HWCAP: add support for AT_HWCAP2 · aaba098f
      Andrew Murray committed
      As we will exhaust the first 32 bits of AT_HWCAP, let's start
      exposing AT_HWCAP2 to userspace, giving us up to 64 caps.
      
      Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
      prefer to expand into AT_HWCAP2 in order to provide a consistent
      view to userspace between ILP32 and LP64. Internal to the kernel,
      however, we prefer to continue to use the full space of elf_hwcap.
      
      To reduce complexity and allow for future expansion, we now
      represent hwcaps in the kernel as ordinals and use a
      KERNEL_HWCAP_ prefix. This allows us to support automatic feature
      based module loading for all our hwcaps.
      
      We introduce cpu_set_feature to set hwcaps which complements the
      existing cpu_have_feature helper. These helpers allow us to clean
      up existing direct uses of elf_hwcap and reduce any future effort
      required to move beyond 64 caps.
      
      For convenience we also introduce cpu_{have,set}_named_feature which
      makes use of the cpu_feature macro to allow providing a hwcap name
      without a {KERNEL_}HWCAP_ prefix.
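      
      How the internal ordinals map onto the two 32-bit userspace words
      can be sketched as follows (the helper names here are illustrative,
      not the kernel's exact interface):
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* Internal representation: one ordinal per capability. */
        enum { KERNEL_HWCAP_FP, KERNEL_HWCAP_ASIMD, /* ... up to 64 */ };
        
        static uint64_t elf_hwcap;      /* the full 64-bit internal space */
        
        static void set_cap(unsigned int ordinal)
        {
                elf_hwcap |= UINT64_C(1) << ordinal;
        }
        
        /* Userspace views: low word as AT_HWCAP, high word as AT_HWCAP2,
         * identical for ILP32 and LP64. */
        static uint32_t at_hwcap(void)  { return (uint32_t)elf_hwcap; }
        static uint32_t at_hwcap2(void) { return (uint32_t)(elf_hwcap >> 32); }
        
        int main(void)
        {
                set_cap(KERNEL_HWCAP_ASIMD);
                set_cap(33);    /* an ordinal past 31 lands in AT_HWCAP2 */
                printf("AT_HWCAP=%#x AT_HWCAP2=%#x\n", at_hwcap(), at_hwcap2());
                return 0;
        }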
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      [will: use const_ilog2() and tweak documentation]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  24. 06 Feb 2019, 2 commits
  25. 14 Dec 2018, 3 commits
  26. 06 Dec 2018, 2 commits
  27. 01 Oct 2018, 1 commit
  28. 21 Sep 2018, 1 commit