1. 14 Aug 2019 (4 commits)
  2. 09 Aug 2019 (11 commits)
    • arm64: mm: Simplify definition of virt_addr_valid() · d2d73d2f
      Authored by Will Deacon
      _virt_addr_valid() is defined as the same value in two places and rolls
      its own version of virt_to_pfn() in both cases.
      
      Consolidate these definitions by inlining a simplified version directly
      into virt_addr_valid().
      Signed-off-by: Will Deacon <will@kernel.org>
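      The consolidated shape, as a sketch (assuming the existing arm64 helpers
      __is_lm_address(), virt_to_pfn() and pfn_valid(); the patch itself may
      differ in detail):

          #define virt_addr_valid(addr) ({                                   \
                  __typeof__(addr) __addr = (addr);                          \
                  /* a linear-map address whose backing pfn really exists */ \
                  __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr)); \
          })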
    • arm64: mm: Remove vabits_user · 2c624fe6
      Authored by Steve Capper
      Previous patches have enabled 52-bit kernel + user VAs and there is no
      longer any scenario where user VA != kernel VA size.
      
      This patch removes the now-redundant vabits_user variable and replaces
      its usage with vabits_actual where appropriate.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Introduce 52-bit Kernel VAs · b6d00d47
      Authored by Steve Capper
      Most of the machinery is now in place to enable 52-bit kernel VAs that
      are detectable at boot time.
      
      This patch adds a Kconfig option for 52-bit user and kernel addresses,
      plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ,
      physvirt_offset and vmemmap at early boot.
      
      To simplify things, this patch also removes the 52-bit-user/48-bit-kernel
      Kconfig option.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Modify calculation of VMEMMAP_SIZE · ce3aaed8
      Authored by Steve Capper
      In a later patch we will need to have a slightly larger VMEMMAP region
      to accommodate boot time selection between 48/52-bit kernel VAs.
      
      This patch modifies the formula for computing VMEMMAP_SIZE to depend
      explicitly on PAGE_OFFSET and the start of kernel-addressable memory.
      (This allows for a slightly larger direct linear map in the future.)
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
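      The new formula then takes roughly this shape (a sketch using the arm64
      constants _PAGE_END(), PAGE_SHIFT and STRUCT_PAGE_MAX_SHIFT; the exact
      arrangement in the patch may differ):

          /* one struct page per page of linear-map VA space */
          #define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) \
                                >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))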
    • arm64: mm: Separate out vmemmap · c8b6d2cc
      Authored by Steve Capper
      vmemmap is a preprocessor definition that depends on a variable,
      memstart_addr. In a later patch we will need to expand the size of
      the VMEMMAP region and optionally modify vmemmap depending upon
      whether or not hardware support is available for 52-bit virtual
      addresses.
      
      This patch changes vmemmap to be a variable. As the old definition
      depended on a variable load, this should not affect performance
      noticeably.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
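      Conceptually the change is (a sketch; names as used in arm64's memory.h):

          /* before: a macro hiding a runtime load of memstart_addr */
          #define vmemmap \
                  ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))

          /* after: an ordinary variable, assigned once during boot */
          struct page *vmemmap;
          vmemmap = ((struct page *)VMEMMAP_START -
                     (memstart_addr >> PAGE_SHIFT));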
    • arm64: mm: Logic to make offset_ttbr1 conditional · c812026c
      Authored by Steve Capper
      When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
      ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
      be used for both userspace and kernel.
      
      Moving to a 52-bit kernel VA, we no longer require this offset to
      ttbr1_el1 when running on a system with HW support for 52-bit VAs.
      
      This patch introduces conditional logic to offset_ttbr1 to query
      SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
      support for 52-bit VAs then the ttbr1 offset is skipped.
      
      We choose to read a system register rather than vabits_actual because
      offset_ttbr1 can be called in places where the kernel data is not
      actually mapped.
      
      Calls to offset_ttbr1 appear to be made from rarely called code paths so
      this extra logic is not expected to adversely affect performance.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
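      The check, expressed in C for clarity (the real offset_ttbr1 is an
      assembly macro; the constant names below follow asm/sysreg.h and
      asm/pgtable-hwdef.h and should be treated as a sketch):

          u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

          /* a non-zero VARange field means the HW supports 52-bit VAs */
          if (!((mmfr2 >> ID_AA64MMFR2_LVA_SHIFT) & 0xf))
                  ttbr |= TTBR1_BADDR_4852_OFFSET;   /* 48-bit fallback */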
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Authored by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, one
      needs to know the actual VA_BITS detected. A new variable vabits_actual
      is introduced in this commit and employed for the KVM hypervisor layout,
      KASAN, fault handling and phys-to/from-virt translation, where there
      would normally be compile-time constants.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
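      With that variable in place, the translation stays a single subtraction
      (a sketch of the idea):

          s64 physvirt_offset;    /* set at boot: PHYS_OFFSET - PAGE_OFFSET */

          #define __phys_to_virt(x) ((unsigned long)((x) - physvirt_offset))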
    • arm64: mm: Introduce VA_BITS_MIN · 90ec95cd
      Authored by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, the
      kernel needs to know the most conservative VA_BITS possible should it
      need to fall back to this quantity due to lack of hardware support.
      
      A new compile time constant VA_BITS_MIN is introduced in this patch and
      it is employed in the KASAN end address, KASLR, and EFI stub.
      
      For Arm, if 52-bit VA support is unavailable, the fallback is to 48 bits.
      
      In other words: VA_BITS_MIN = min(48, VA_BITS)
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
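      As a compile-time constant, this is simply (sketch):

          #if VA_BITS > 48
          #define VA_BITS_MIN     (48)
          #else
          #define VA_BITS_MIN     (VA_BITS)
          #endif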
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Authored by Steve Capper
      KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a
      command-line argument and affects the codegen of the inline address
      sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted into many places, thus
      KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
      text.
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
      shadow addresses we know that the end of the shadow region is constant
      w.r.t. VA space size:
          KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
      build scripts, with the VA size used as a parameter. (There are
      build-time checks in the C code too, to ensure that the expected values
      are being derived.) It is sufficient, and indeed a simplification, to
      remove the build scripts (and build-time checks) entirely and instead
      provide the KASAN_SHADOW_OFFSET values directly.
      
      This patch removes the logic that computes KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead adopts the approach used by x86: supplying
      the offset values in Kconfig. To help debug/develop future VA space
      changes, the Makefile logic has been preserved in a script file in the
      arm64 Documentation folder.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
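      A worked instance of the invariant, for generic KASAN where
      KASAN_SHADOW_SCALE_SHIFT == 3 (a sketch, not the patch itself):

          /* the end of the shadow region is independent of the VA size */
          #define KASAN_SHADOW_END \
                  ((~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)

          /* while its start grows downwards as va_bits grows */
          #define KASAN_SHADOW_START(va) \
                  (KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))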
    • arm64: mm: Flip kernel VA space · 14c127c9
      Authored by Steve Capper
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address space halves such
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a guard region of 2MB (to preserve
      alignment for the fixmap) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
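      The flipped layout, roughly (a sketch, not to scale):

          /*
           * [PAGE_OFFSET ... PAGE_END)  direct linear map   (lower half)
           * [PAGE_END    ...  ~0 ]      KASAN shadow, modules, vmalloc,
           *                             vmemmap, fixmap     (upper half)
           *
           * KASAN_SHADOW_END and the kernel .text keep fixed addresses
           * across VA sizes; only the bottom of the linear map moves.
           */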
    • arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START · 9cb1c5dd
      Authored by Steve Capper
      Currently there are assumptions about the alignment of VMEMMAP_START
      and PAGE_OFFSET that won't be valid after this series is applied.
      
      These assumptions are in the form of bitwise operators being used
      instead of addition and subtraction when calculating addresses.
      
      This patch replaces these bitwise operators with addition/subtraction.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
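      The pitfall being removed, illustrated with a generic example (not the
      exact hunks from the patch):

          /* only equivalent while `base` is aligned so no bits overlap */
          addr = base | offset;   /* breaks once base loses its alignment */
          addr = base + offset;   /* correct regardless of alignment      */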
  3. 02 Aug 2019 (1 commit)
    • arm64: kprobes: Recover pstate.D in single-step exception handler · b3980e48
      Authored by Masami Hiramatsu
      kprobes manipulates the interrupted PSTATE for the single step, and
      doesn't restore it. Thus, if we put a kprobe where pstate.D
      (debug) is masked, the mask will be cleared after the kprobe hits.
      
      Moreover, in the most complicated case, this can lead to a kernel
      crash with the message below when a nested kprobe hits.
      
      [  152.118921] Unexpected kernel single-step exception at EL1
      
      When the 1st kprobe hits, do_debug_exception() will be called.
      At this point, the debug exception (= pstate.D) must be masked (=1).
      But if another kprobe hits before the single-step of the first kprobe
      (e.g. inside the user pre_handler), it unmasks the debug exception
      (pstate.D = 0) and returns.
      Then, when the 1st kprobe sets up the single-step, it saves the current
      DAIF, masks DAIF, enables single-step, and restores DAIF.
      However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the
      single-step exception happens soon after restoring DAIF.
      
      This was introduced by commit 7419333f ("arm64: kprobe:
      Always clear pstate.D in breakpoint exception handler").
      
      To solve this issue, this patch stores all of the DAIF bits and
      restores them after single-stepping.
      Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Fixes: 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
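      The essence of the fix, sketched (kcb being the arm64 kprobe_ctlblk, and
      DAIF_MASK covering all four PSTATE.{D,A,I,F} bits; the field name here
      is illustrative):

          /* save *all* of DAIF, not just the I bit, before stepping */
          kcb->saved_daif = regs->pstate & DAIF_MASK;
          regs->pstate &= ~DAIF_MASK;

          /* ... single-step the probed instruction ... */

          /* put every saved bit back, including D */
          regs->pstate &= ~DAIF_MASK;
          regs->pstate |= kcb->saved_daif;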
  4. 01 Aug 2019 (5 commits)
  5. 31 Jul 2019 (1 commit)
  6. 25 Jul 2019 (1 commit)
    • treewide: add "WITH Linux-syscall-note" to SPDX tag of uapi headers · d9c52522
      Authored by Masahiro Yamada
      UAPI headers licensed under GPL are supposed to carry the exception
      "WITH Linux-syscall-note" so that they can be included in non-GPL
      user-space application code.
      
      The exception note is missing in some UAPI headers.
      
      Some of them slipped in via the treewide conversion commit b2441318
      ("License cleanup: add SPDX GPL-2.0 license identifier to files with
      no license"). Just run:
      
        $ git show --oneline b2441318 -- arch/x86/include/uapi/asm/
      
      I believe they are not intentional, and should be fixed too.
      
      This patch was generated by the following script:
      
        git grep -l --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
          -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild |
        while read file
        do
                sed -i -e '/[[:space:]]OR[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
                -e '/[[:space:]]or[[:space:]]/s/\(GPL-[^[:space:]]*\)/(\1 WITH Linux-syscall-note)/g' \
                -e '/[[:space:]]OR[[:space:]]/!{/[[:space:]]or[[:space:]]/!s/\(GPL-[^[:space:]]*\)/\1 WITH Linux-syscall-note/g}' $file
        done
      
      After this patch is applied, there are 5 UAPI headers that do not contain
      "WITH Linux-syscall-note". They are kept untouched since this exception
      applies only to GPL variants.
      
        $ git grep --not -e Linux-syscall-note --and -e SPDX-License-Identifier \
          -- :arch/*/include/uapi/asm/*.h :include/uapi/ :^*/Kbuild
        include/uapi/drm/panfrost_drm.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/batman_adv.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/qemu_fw_cfg.h:/* SPDX-License-Identifier: BSD-3-Clause */
        include/uapi/linux/vbox_err.h:/* SPDX-License-Identifier: MIT */
        include/uapi/linux/virtio_iommu.h:/* SPDX-License-Identifier: BSD-3-Clause */
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
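      The effect on an individual header, for illustration (hypothetical
      before/after lines matching what the sed expressions produce):

          /* before */
          /* SPDX-License-Identifier: GPL-2.0+ */
          /* SPDX-License-Identifier: GPL-2.0 OR MIT */

          /* after */
          /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
          /* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */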
  7. 22 Jul 2019 (6 commits)
  8. 17 Jul 2019 (2 commits)
  9. 13 Jul 2019 (1 commit)
    • arm64: switch to generic version of pte allocation · 50f11a8a
      Authored by Mike Rapoport
      The PTE allocations in arm64 are identical to the generic ones modulo the
      GFP flags.
      
      Using the generic pte_alloc_one() functions ensures that the user page
      tables are allocated with __GFP_ACCOUNT set.
      
      The arm64 definition of PGALLOC_GFP is removed and replaced with
      GFP_PGTABLE_USER for p[gum]d_alloc_one() for the user page tables and
      GFP_PGTABLE_KERNEL for the kernel page tables. The KVM memory cache is now
      using GFP_PGTABLE_USER.
      
      The mappings created with create_pgd_mapping() are now using
      GFP_PGTABLE_KERNEL.
      
      The conversion to the generic version of pte_free_kernel() removes the NULL
      check for pte.
      
      The pte_free() version on arm64 is identical to the generic one and
      can be simply dropped.
      
      [cai@lca.pw: fix a bogus GFP flag in pgd_alloc()]
        Link: https://lore.kernel.org/r/1559656836-24940-1-git-send-email-cai@lca.pw/
      [and fix it more]
        Link: https://lore.kernel.org/linux-mm/20190617151252.GF16810@rapoport-lnx/
      Link: http://lkml.kernel.org/r/1557296232-15361-5-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Guo Ren <ren_guo@c-sky.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Sam Creasey <sammy@sammy.net>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
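      The flag split at the heart of the change, sketched (these helpers are
      introduced for the generic pgalloc code by this series):

          #define GFP_PGTABLE_KERNEL      (GFP_KERNEL | __GFP_ZERO)

          /* user page tables are additionally charged to the memory cgroup */
          #define GFP_PGTABLE_USER        (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)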
  10. 08 Jul 2019 (1 commit)
    • KVM: arm/arm64: Initialise host's MPIDRs by reading the actual register · 1e0cf16c
      Authored by Marc Zyngier
      As part of setting up the host context, we populate its
      MPIDR by using cpu_logical_map(). It turns out that, contrary
      to arm64, cpu_logical_map() on 32-bit ARM doesn't return the
      *full* MPIDR, but a truncated version.
      
      This leaves the host MPIDR slightly corrupted after the first
      run of a VM, since we won't correctly restore the MPIDR on
      exit. Oops.
      
      Since we cannot trust cpu_logical_map(), let's adopt a different
      strategy. We move the initialization of the host CPU context as
      part of the per-CPU initialization (which, in retrospect, makes
      a lot of sense), and directly read the MPIDR from the HW. This
      is guaranteed to work on both arm and arm64.
      Reported-by: Andre Przywara <Andre.Przywara@arm.com>
      Tested-by: Andre Przywara <Andre.Przywara@arm.com>
      Fixes: 32f13955 ("arm/arm64: KVM: Statically configure the host's view of MPIDR")
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
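      The gist, as a sketch (hypothetical helper name; read_cpuid_mpidr()
      exists on both arm and arm64, so no cpu_logical_map() is involved):

          /* run once per CPU during init, not lazily before a VM run */
          static void cpu_init_host_ctxt(struct kvm_cpu_context *ctxt)
          {
                  /* read the full MPIDR straight from the hardware */
                  ctxt->sys_regs[MPIDR_EL1] = read_cpuid_mpidr();
          }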
  11. 05 Jul 2019 (6 commits)
    • KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s · fdec2a9e
      Authored by Dave Martin
      Currently, the {read,write}_sysreg_el*() accessors for accessing
      particular ELs' sysregs in the presence of VHE rely on some local
      hacks and define their system register encodings in a way that is
      inconsistent with the core definitions in <asm/sysreg.h>.
      
      As a result, it is necessary to add duplicate definitions for any
      system register that already needs a definition in sysreg.h for
      other reasons.
      
      This is a bit of a maintenance headache, and the reasons for the
      _el*() accessors working the way they do are largely historical.
      
      This patch gets rid of the shadow sysreg definitions in
      <asm/kvm_hyp.h>, converts the _el*() accessors to use the core
      __msr_s/__mrs_s interface, and converts all call sites to use the
      standard sysreg #define names (i.e., upper case, with SYS_ prefix).
      
      This patch will conflict heavily anyway, so the opportunity is taken
      to clean up some bad whitespace in the context of the changes.
      
      The change exposes a few system registers that have no sysreg.h
      definition, due to msr_s/mrs_s being used in place of msr/mrs:
      additions are made in order to fill in the gaps.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
      [Rebased to v4.21-rc1]
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      [Rebased to v5.2-rc5, changelog updates]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
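      The resulting accessor shape, sketched (modelled on read_sysreg_elx();
      VHE selects the _EL12 encoding via an alternative):

          #define read_sysreg_elx(r, nvh, vh)                            \
          ({                                                             \
                  u64 reg;                                               \
                  asm volatile(ALTERNATIVE(__mrs_s("%0", r##nvh),        \
                                           __mrs_s("%0", r##vh),         \
                                           ARM64_HAS_VIRT_HOST_EXTN)     \
                               : "=r" (reg));                            \
                  reg;                                                   \
          })

          #define read_sysreg_el1(r)      read_sysreg_elx(r, _EL1, _EL12)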
    • KVM: arm/arm64: Add save/restore support for firmware workaround state · 99adb567
      Authored by Andre Przywara
      KVM implements the firmware interface for mitigating cache speculation
      vulnerabilities. Guests may use this interface to ensure mitigation is
      active.
      If we want to migrate such a guest to a host with a different support
      level for those workarounds, migration might need to fail, to ensure that
      critical guests don't lose their protection.
      
      Introduce a way for userland to save and restore the workarounds state.
      On restoring we do checks that make sure we don't downgrade our
      mitigation level.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests · c118bbb5
      Authored by Andre Przywara
      Recent commits added the explicit notion of "workaround not required" to
      the state of the Spectre v2 (aka. BP_HARDENING) workaround, where we
      just had "needed" and "unknown" before.
      
      Export this knowledge to the rest of the kernel and enhance the existing
      kvm_arm_harden_branch_predictor() to report this new state as well.
      Export this new state to guests when they use KVM's firmware interface
      emulation.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Steven Price <steven.price@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
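      The three states as reported to the rest of the kernel (a sketch; treat
      the exact constant names and values as an assumption):

          #define KVM_BP_HARDEN_UNKNOWN           -1  /* cannot tell        */
          #define KVM_BP_HARDEN_WA_NEEDED          0  /* mitigation in use  */
          #define KVM_BP_HARDEN_NOT_REQUIRED       1  /* HW not affected    */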
    • KVM: arm64: Consume pending SError as early as possible · 0e5b9c08
      Authored by James Morse
      On systems with v8.2 we switch the 'vaxorcism' of guest SErrors to an
      alternative sequence that uses the ESB-instruction, then reads DISR_EL1.
      This saves the unmasking and remasking of asynchronous exceptions.
      
      We do this after we've saved the guest registers and restored the
      host's. Any SError that becomes pending due to this will be accounted
      to the guest, even though it actually occurred during host execution.
      
      Move the ESB-instruction as early as possible. Any guest SError
      will become pending due to this ESB-instruction and then consumed to
      DISR_EL1 before the host touches anything.
      
      This lets us account for host/guest SError precisely on the guest
      exit exception boundary.
      
      Because the ESB-instruction now lands in the preamble section of
      the vectors, we need to add it to the unpatched indirect vectors
      too, and to any sequence that may be patched in over the top.
      
      The ESB-instruction always lives in the head of the vectors,
      to be before any memory write. Whereas the register-store always
      lives in the tail.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Abstract the size of the HYP vectors pre-amble · 3dbf100b
      Authored by James Morse
      The EL2 vector hardening feature causes KVM to generate vectors for
      each type of CPU present in the system. The generated sequences already
      do some of the early guest-exit work (i.e. saving registers). To avoid
      duplication the generated vectors branch to the original vector just
      after the preamble. This size is hard coded.
      
      Adding new instructions to the HYP vector causes strange side effects,
      which are difficult to debug as the affected code is patched in at
      runtime.
      
      Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big
      the preamble is. The valid_vect macro can then validate this at
      build time.
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • arm64: assembler: Switch ESB-instruction with a vanilla nop if !ARM64_HAS_RAS · 2b68a2a9
      Authored by James Morse
      The ESB-instruction is a nop on CPUs that don't implement the RAS
      extensions. This lets us use it in places like the vectors without
      having to use alternatives.
      
      If someone disables CONFIG_ARM64_RAS_EXTN, this instruction still has
      its RAS extensions behaviour, but we no longer read DISR_EL1 as this
      register does depend on alternatives.
      
      This could go wrong if we want to synchronize an SError from a KVM
      guest. On a CPU that has the RAS extensions, but where the Kconfig option
      was disabled, we consume the pending SError with no chance of ever
      reading it.
      
      Hide the ESB-instruction behind the CONFIG_ARM64_RAS_EXTN option,
      outputting a regular nop if the feature has been disabled.
      Reported-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  12. 01 Jul 2019 (1 commit)