1. 09 Aug 2019, 2 commits
    • arm64: mm: Introduce 52-bit Kernel VAs · b6d00d47
      Committed by Steve Capper
      Most of the machinery is now in place to enable 52-bit kernel VAs that
      are detectable at boot time.
      
      This patch adds a Kconfig option for 52-bit user and kernel addresses,
      plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ,
      physvirt_offset and vmemmap at early boot.
      
      To simplify things this patch also removes the 52-bit user/48-bit kernel
      kconfig option.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Logic to make offset_ttbr1 conditional · c812026c
      Committed by Steve Capper
      When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
      ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
      be used for both userspace and kernel.
      
      Moving on to a 52-bit kernel VA, we no longer require this offset to
      ttbr1_el1 when running on a system with HW support for 52-bit VAs.
      
      This patch introduces conditional logic in offset_ttbr1 to query
      SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
      support for 52-bit VAs, the ttbr1 offset is skipped (a sketch of this
      check follows the entry).
      
      We choose to read a system register rather than vabits_actual because
      offset_ttbr1 can be called in places where the kernel data is not
      actually mapped.
      
      Calls to offset_ttbr1 appear to be made from rarely called code paths so
      this extra logic is not expected to adversely affect performance.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
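      The check referenced above, sketched in C rather than assembly: the
      ID_AA64MMFR2_EL1.VARange field (bits [19:16]) is what the real offset_ttbr1
      macro queries with an MRS; the register values fed in here are made up for
      illustration and the helper name is not a kernel API.

        #include <stdio.h>
        #include <stdint.h>

        /* Illustrative decision only: in the kernel this is an MRS of
         * SYS_ID_AA64MMFR2_EL1 inside the offset_ttbr1 assembly macro.
         * VARange lives in bits [19:16]; non-zero means the HW supports
         * 52-bit VAs. */
        static unsigned int varange(uint64_t mmfr2)
        {
            return (mmfr2 >> 16) & 0xf;
        }

        int main(void)
        {
            uint64_t samples[] = { 0x0, 0x1ULL << 16 };  /* made-up register values */

            for (int i = 0; i < 2; i++) {
                if (varange(samples[i]))
                    printf("52-bit VAs in HW: ttbr1 offset skipped\n");
                else
                    printf("48-bit VAs only: ttbr1_el1 gets the 48/52 offset\n");
            }
            return 0;
        }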
  2. 19 Jun 2019, 1 commit
  3. 09 Apr 2019, 1 commit
  4. 01 Mar 2019, 1 commit
    • arm64: Add workaround for Fujitsu A64FX erratum 010001 · 3e32131a
      Committed by Zhang Lei
      On Fujitsu A64FX cores (versions 1.0 and 1.1), a memory access may cause
      an undefined fault (Data abort, DFSC=0b111111). This fault occurs under
      a specific hardware condition when a load/store instruction performs an
      address translation. Any load/store instruction, other than non-fault
      accesses (both Armv8 and SVE), might cause this undefined fault.
      
      The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE
      is enabled to mitigate timing attacks against KASLR where the kernel
      address space could be probed using the FFR and suppressed fault on
      SVE loads.
      
      Since this erratum causes spurious exceptions, which may corrupt
      the exception registers, we clear the TCR_ELx.NFDx bits when
      booting on an affected CPU (a sketch of the value/mask MIDR matching
      used to detect affected CPUs follows this entry).
      Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
      [Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler
       and always disabled the NFDx bits on affected CPUs]
      Signed-off-by: James Morse <james.morse@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
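      A self-contained sketch of the value/mask MIDR matching mentioned in the
      bracketed note above; the value, mask and MIDR samples below are placeholders
      chosen for illustration, not the real A64FX encodings.

        #include <stdio.h>
        #include <stdint.h>

        /* A CPU is treated as affected when (MIDR & mask) == value. */
        #define EXAMPLE_MIDR_VALUE 0x46000010u  /* hypothetical implementer/part */
        #define EXAMPLE_MIDR_MASK  0xff00fff0u  /* ignore variant and revision   */

        static int cpu_affected(uint32_t midr)
        {
            return (midr & EXAMPLE_MIDR_MASK) == EXAMPLE_MIDR_VALUE;
        }

        int main(void)
        {
            printf("%d\n", cpu_affected(0x46010013u));  /* matching part -> 1  */
            printf("%d\n", cpu_affected(0x41002000u));  /* different part -> 0 */
            return 0;
        }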
  5. 06 Feb 2019, 1 commit
  6. 29 Dec 2018, 1 commit
  7. 11 Dec 2018, 3 commits
    • arm64: Kconfig: Re-jig CONFIG options for 52-bit VA · 68d23da4
      Committed by Will Deacon
      Enabling 52-bit VAs for userspace is pretty confusing, since it requires
      you to select "48-bit" virtual addressing in the Kconfig.
      
      Rework the logic so that 52-bit user virtual addressing is advertised in
      the "Virtual address space size" choice, along with some help text to
      describe its interaction with Pointer Authentication. The EXPERT-only
      option to force all user mappings to the 52-bit range is then made
      available immediately below the VA size selection.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: introduce 52-bit userspace support · 67e7fdfc
      Committed by Steve Capper
      On arm64 there is optional support for a 52-bit virtual address space.
      To exploit this, one has to be running with a 64KB page size on hardware
      that supports it.
      
      For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
      some changes are needed to support a 52-bit userspace:
       * TCR_EL1.T0SZ needs to be 12 instead of 16,
       * TASK_SIZE needs to reflect the new size.
      
      This patch implements the above when support for 52-bit VAs is
      detected at early boot time (the TxSZ arithmetic is sketched after this
      entry).
      
      On arm64, userspace address translation is controlled by TTBR0_EL1. As
      well as userspace, TTBR0_EL1 controls:
       * The identity mapping,
       * EFI runtime code.
      
      It is possible to run a kernel with an identity mapping that has a
      larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
      would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
      52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
      12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
      disabled.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
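      The TxSZ arithmetic mentioned above, as a standalone sketch: TCR_EL1.T0SZ
      encodes the size of the TTBR0 region as 2^(64 - T0SZ), so 48-bit userspace
      needs T0SZ = 16 and 52-bit userspace needs T0SZ = 12. The helper name is made
      up for illustration, not a kernel API.

        #include <stdio.h>

        /* Hypothetical helper mirroring the relationship T0SZ = 64 - VA bits. */
        static unsigned int t0sz_for_va_bits(unsigned int va_bits)
        {
            return 64 - va_bits;
        }

        int main(void)
        {
            printf("48-bit VA -> T0SZ = %u\n", t0sz_for_va_bits(48));  /* 16 */
            printf("52-bit VA -> T0SZ = %u\n", t0sz_for_va_bits(52));  /* 12 */
            return 0;
        }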
    • arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD · e842dfb5
      Committed by Steve Capper
      Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
      entries (for the 48-bit case) to 1024 entries. This quantity,
      PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds
      to a given virtual address, addr:
      
      pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
      
      Userspace addresses are prefixed by 0's, so for a 48-bit userspace
      address, uva, the following is true:
      (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words, a 48-bit userspace address will have the same pgd_index
      when using PTRS_PER_PGD = 64 and 1024.
      
      Kernel addresses are prefixed by 1's so, given a 48-bit kernel address,
      kva, we have the following inequality:
      (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words a 48-bit kernel virtual address will have a different
      pgd_index when using PTRS_PER_PGD = 64 and 1024.
      
      If, however, we note that:
      kva = (0xFFFF << 48) + lower (where lower[63:48] == 0)
      and PGDIR_SHIFT = 42 (as we are dealing with a 64KB PAGE_SIZE)
      
      We can consider:
      ((kva >> PGDIR_SHIFT) & (1024 - 1)) - ((kva >> PGDIR_SHIFT) & (64 - 1))
       = ((0xFFFF << 6) & 0x3FF) - ((0xFFFF << 6) & 0x3F)   // "lower" cancels out
       = 0x3C0
      
      In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
      provided that ttbr1_el1 is incremented by 0x3C0 * 8 = 0x1E00 bytes when
      running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16); a runnable version of
      this arithmetic follows the entry.
      
      For kernel configurations where 52-bit userspace VAs are possible, this
      patch offsets ttbr1_el1 and sets PTRS_PER_PGD to the 52-bit value.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      [will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
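      The pgd_index arithmetic above can be checked with a standalone program; this
      is a userspace sketch rather than kernel code, and the example addresses are
      arbitrary 48-bit user and kernel VAs.

        #include <stdio.h>

        #define PGDIR_SHIFT 42  /* 64KB pages, as in the commit message */

        static unsigned long long pgd_index(unsigned long long addr,
                                            unsigned long long ptrs_per_pgd)
        {
            return (addr >> PGDIR_SHIFT) & (ptrs_per_pgd - 1);
        }

        int main(void)
        {
            unsigned long long uva = 0x0000ffffdeadb000ULL;  /* arbitrary user VA   */
            unsigned long long kva = 0xffffdeadbeef0000ULL;  /* arbitrary kernel VA */

            /* Identical for the userspace address... */
            printf("user  : %#llx vs %#llx\n", pgd_index(uva, 1024), pgd_index(uva, 64));
            /* ...but differing by 0x3C0 entries (0x1E00 bytes) for the kernel one. */
            printf("kernel: %#llx vs %#llx\n", pgd_index(kva, 1024), pgd_index(kva, 64));
            printf("offset: %#llx entries, %#llx bytes\n",
                   pgd_index(kva, 1024) - pgd_index(kva, 64),
                   (pgd_index(kva, 1024) - pgd_index(kva, 64)) * 8);
            return 0;
        }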
  8. 18 Sep 2018, 1 commit
    • arm64: mm: Support Common Not Private translations · 5ffdfaed
      Committed by Vladimir Murzin
      Common Not Private (CNP) is a feature of the ARMv8.2 extension which
      allows translation table entries to be shared between different PEs in
      the same inner shareable domain, so the hardware can use this fact to
      optimise the caching of such entries in the TLB.
      
      CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to
      the hardware that the translation table entries pointed to by this
      TTBR are the same as those of every PE in the same inner shareable
      domain whose equivalent TTBR also has the CNP bit set. If the CNP bit
      is set but the TTBR does not point at the same translation table
      entries for a given ASID and VMID, then the system is misconfigured
      and the results of translations are UNPREDICTABLE.
      
      For the kernel we postpone setting CNP until all CPUs are up and rely on
      the cpufeature framework to 1) patch the code which is sensitive to CNP
      and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be
      reprogrammed as a result of hibernation or cpuidle (via __enable_mmu).
      For these two cases we restore the CnP bit via __cpu_suspend_exit().
      
      There are a few cases where we need to take care of changes to TTBR0_EL1:
        - a switch to the idmap
        - software emulated PAN
      
      We rule out the latter via Kconfig options, and for the former we make
      sure that CNP is set for non-zero ASIDs only.
      Reviewed-by: James Morse <james.morse@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 23 Jun 2018, 1 commit
  10. 27 Mar 2018, 1 commit
    • arm64: Delay enabling hardware DBM feature · 05abb595
      Committed by Suzuki K Poulose
      We enable the hardware DBM bit on a capable CPU very early in the
      boot via __cpu_setup. This doesn't give us the flexibility to
      optionally disable the feature, and clearing the bit later is
      somewhat costly as the TLB can cache the settings. Instead,
      we delay enabling the feature until the CPU is brought up
      into the kernel. We use the feature capability mechanism
      to handle it.
      
      Hardware DBM is a non-conflicting feature, i.e. the kernel
      can safely run with a mix of CPUs, some using the feature
      and others not. So it is safe for a late CPU to have
      this capability and enable it, even if the active CPUs don't.
      
      To get this handled properly by the infrastructure, we
      unconditionally set the capability and only enable it
      on CPUs which really have the feature. Also, we print the
      feature detection from the "matches" callback to make sure
      we don't mislead the user when none of the CPUs could use the
      feature.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 07 Mar 2018, 1 commit
  12. 15 Feb 2018, 1 commit
  13. 07 Feb 2018, 3 commits
  14. 27 Jan 2018, 1 commit
  15. 17 Jan 2018, 1 commit
    • arm64: kpti: Fix the interaction between ASID switching and software PAN · 6b88a32c
      Committed by Catalin Marinas
      With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
      active ASID to decide whether user access was enabled (non-zero ASID)
      when the exception was taken. On return from exception, if user access
      was previously disabled, it re-instates TTBR0_EL1 from the per-thread
      saved value (updated in switch_mm() or efi_set_pgd()).
      
      Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes
      TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7
      ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
      __uaccess_ttbr0_disable() function and asm macro to first write the
      reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
      exception occurs between these two, the exception return code will
      re-instate a valid TTBR0_EL1. A similar scenario can happen in
      cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
      update in cpu_do_switch_mm().
      
      This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
      disables the interrupts around the TTBR0_EL1 and ASID switching code in
      __uaccess_ttbr0_disable(). It also ensures that, when returning from the
      EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
      TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.
      
      The accesses to current_thread_info()->ttbr0 are updated to use
      READ_ONCE/WRITE_ONCE.
      
      As a safety measure, __uaccess_ttbr0_enable() always masks out any
      existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.
      
      Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 16 Jan 2018, 2 commits
    • arm64: kernel: Prepare for a DISR user · 68ddbf09
      Committed by James Morse
      KVM would like to consume any pending SError (or RAS error) after guest
      exit. Today it has to unmask SError and use dsb+isb to synchronise the
      CPU. With the RAS extensions we can use ESB to synchronise any pending
      SError.
      
      Add the necessary macros to allow DISR to be read and converted to an
      ESR.
      
      We clear the DISR register when we enable the RAS cpufeature; at that
      point the kernel has not executed any ESB instructions, so any value we
      find in DISR must have belonged to firmware. Executing an ESB
      instruction is the only way to update DISR, so we can expect firmware to
      have handled any deferred SError. By the same logic we clear DISR in the
      idle path.
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: sysreg: Move to use definitions for all the SCTLR bits · 7a00d68e
      Committed by James Morse
      __cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks,
      and el2_setup() duplicates some of this when setting RES1 bits.
      
      Let's make this the same as KVM's hyp_init, which uses named bits.
      
      First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0}
      bits, and those we want to set or clear.
      
      Add a build_bug check to ensure all bits are either set or clear
      (a self-contained sketch of this kind of check follows the entry).
      This means we don't need to preserve endianness configuration
      generated elsewhere.
      
      Finally, move the head.S and proc.S users of these hard-coded masks
      over to the macro versions.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
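      A self-contained sketch of the "every bit is either set or clear" build-time
      check described above; the bit names and values are placeholders, not the real
      SCTLR_EL1 layout, and the check itself is a plain _Static_assert rather than
      the kernel's build_bug machinery.

        #include <stdio.h>

        /* Placeholder bit definitions -- NOT the real SCTLR_EL1 layout. */
        #define EXAMPLE_M     (1u << 0)     /* hypothetical: MMU enable        */
        #define EXAMPLE_C     (1u << 2)     /* hypothetical: data cache enable */
        #define EXAMPLE_I     (1u << 12)    /* hypothetical: icache enable     */
        #define EXAMPLE_RES1  (0x3u << 28)  /* hypothetical RES1 bits          */

        #define EXAMPLE_SET   (EXAMPLE_M | EXAMPLE_C | EXAMPLE_I | EXAMPLE_RES1)
        #define EXAMPLE_CLEAR 0xcfffeffau   /* every remaining bit, listed explicitly */

        /* Compilation fails if any of the 32 bits is missing from both lists. */
        _Static_assert((EXAMPLE_SET ^ EXAMPLE_CLEAR) == 0xffffffffu,
                       "every bit must be either set or cleared");

        int main(void)
        {
            printf("initial value: %#x\n", EXAMPLE_SET);
            return 0;
        }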
  17. 13 Jan 2018, 1 commit
  18. 09 Jan 2018, 1 commit
  19. 23 Dec 2017, 2 commits
  20. 11 Dec 2017, 3 commits
  21. 02 Nov 2017, 1 commit
  22. 24 Feb 2017, 1 commit
    • arm64: Avoid clobbering mm in erratum workaround on QDF2400 · ea6eac90
      Committed by Shanker Donthineni
      Commit 38fd94b0 ("arm64: Work around Falkor erratum 1003") tried to
      work around a hardware erratum, but actually caused a system crash of
      its own during switch_mm:
      
       cpu_do_switch_mm+0x20/0x40
       efi_virtmap_load+0x34/0x40
       virt_efi_get_next_variable+0x64/0xc8
       efivar_init+0x8c/0x348
       efisubsys_init+0xd4/0x270
       do_one_initcall+0x80/0x110
       kernel_init_freeable+0x19c/0x240
       kernel_init+0x10/0x100
       ret_from_fork+0x10/0x50
      
       Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      In cpu_do_switch_mm, x1 contains the mm_struct pointer, which needs to
      be preserved by the pre_ttbr0_update_workaround macro rather than passed
      as a temporary.
      
      This patch clobbers x2 and x3 instead, keeping the mm_struct intact
      after the workaround has run.
      
      Fixes: 38fd94b0 ("arm64: Work around Falkor erratum 1003")
      Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  23. 10 Feb 2017, 1 commit
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Committed by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  24. 22 Nov 2016, 1 commit
  25. 12 Nov 2016, 1 commit
    • arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx · 623b476f
      Committed by Mark Rutland
      When returning from idle, we rely on the fact that thread_info lives at
      the end of the kernel stack, and restore this by masking the saved stack
      pointer. Subsequent patches will sever the relationship between the
      stack and thread_info, and to cater for this we must save/restore sp_el0
      explicitly, storing it in cpu_suspend_ctx.
      
      As cpu_suspend_ctx must be doubleword aligned, this leaves us with an
      extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1
      in the same way, which simplifies the code, avoiding pointer chasing on
      the restore path (as we no longer need to load thread_info::cpu followed
      by the relevant slot in __per_cpu_offset based on this).
      
      This patch stashes both registers in cpu_suspend_ctx (a short sketch of
      the alignment arithmetic follows this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
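      A minimal illustration of why the doubleword (16-byte) alignment leaves a
      spare 64-bit slot going free; the struct layouts and field counts are
      placeholders, not the real cpu_suspend_ctx.

        #include <stdio.h>
        #include <stdint.h>

        /* Hypothetical layouts: an odd number of 64-bit slots, with and without
         * an extra register squeezed into the alignment padding. */
        struct ctx_without_extra {
            uint64_t regs[11];          /* odd count -> 8 bytes of tail padding */
        } __attribute__((aligned(16)));

        struct ctx_with_extra {
            uint64_t regs[11];
            uint64_t tpidr_el1;         /* fits in what was padding */
        } __attribute__((aligned(16)));

        int main(void)
        {
            /* Both sizes come out the same: the extra slot costs nothing. */
            printf("without extra: %zu bytes\n", sizeof(struct ctx_without_extra));
            printf("with extra   : %zu bytes\n", sizeof(struct ctx_with_extra));
            return 0;
        }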
  26. 12 Sep 2016, 1 commit
    • arm64: use alternative auto-nop · 6ba3b554
      Committed by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our homebrew NOP sleds, making the code simpler to read.
      
      Note that for cpu_do_switch_mm the ret has been moved out of the
      alternative sequence, and in the default case there will be three
      additional NOPs executed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  27. 03 Sep 2016, 1 commit
  28. 26 Aug 2016, 1 commit
    • arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap · b6113038
      Committed by James Morse
      Resume from hibernate needs to clean any text executed by the kernel with
      the MMU off to the PoC. Collect these functions together into the
      .idmap.text section as all this code is tightly coupled and also needs
      the same cleaning after resume.
      
      Data is more complicated: secondary_holding_pen_release is written with
      the MMU on, cleaned and invalidated, then read with the MMU off. In
      contrast, __boot_cpu_mode is written with the MMU off and the
      corresponding cache line is invalidated, so when we read it with the MMU
      on we don't get stale data. These cache maintenance operations conflict
      with each other if the values are within a Cache Writeback Granule (CWG)
      of each other.
      Collect the data into two sections, .mmuoff.data.read and
      .mmuoff.data.write; the linker script ensures the .mmuoff.data.write
      section is aligned to the architectural maximum CWG of 2KB (a small
      alignment sketch follows this entry).
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
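      A toy illustration of the over-alignment idea, assuming the architectural
      maximum CWG of 2KB; the variable name is made up and nothing here reproduces
      the kernel's linker-script handling of the .mmuoff.data.write section.

        #include <stdio.h>

        /* Over-align a "written with the MMU off" variable to 2KB so that cache
         * maintenance on neighbouring data can never touch the same CWG. */
        static unsigned long example_mmuoff_write __attribute__((aligned(2048)));

        int main(void)
        {
            printf("address %p, 2KB-aligned: %s\n",
                   (void *)&example_mmuoff_write,
                   ((unsigned long)&example_mmuoff_write % 2048) == 0 ? "yes" : "no");
            return 0;
        }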
  29. 19 Jul 2016, 1 commit
    • arm64: debug: unmask PSTATE.D earlier · 2ce39ad1
      Committed by Will Deacon
      Clearing PSTATE.D is one of the requirements for generating a debug
      exception. The arm64 booting protocol requires that PSTATE.D is set,
      since many of the debug registers (for example, the hw_breakpoint
      registers) are UNKNOWN out of reset and could potentially generate
      spurious, fatal debug exceptions in early boot code if PSTATE.D was
      clear. Once the debug registers have been safely initialised, PSTATE.D
      is cleared; however, this is currently broken for two reasons:
      
      (1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary
          CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall
          runs after SMP (and the scheduler) have been initialised, there is
          no guarantee that it is actually running on the boot CPU. In this
          case, the boot CPU is left with PSTATE.D set and is not capable of
          generating debug exceptions.
      
      (2) In a preemptible kernel, we may explicitly schedule on the IRQ
          return path to EL1. If an IRQ occurs with PSTATE.D set in the idle
          thread, then we may schedule the kthread_init thread, run the
          postcore_initcall to clear PSTATE.D and then context switch back
          to the idle thread before returning from the IRQ. The exception
          return path will then restore PSTATE.D from the stack, and set it
          again.
      
      This patch fixes the problem by moving the clearing of PSTATE.D earlier
      to proc.S. This has the desirable effect of clearing it in one place for
      all CPUs, long before we have to worry about the scheduler or any
      exception handling. We ensure that the previous reset of MDSCR_EL1 has
      completed before unmasking the exception, so that any spurious
      exceptions resulting from UNKNOWN debug registers are not generated.
      
      Without this patch applied, the kprobes selftests have been seen to fail
      under KVM, where we end up attempting to step the OOL instruction buffer
      with PSTATE.D set and therefore fail to complete the step.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  30. 28 Apr 2016, 2 commits