1. 07 Mar 2018, 1 commit
  2. 15 Feb 2018, 1 commit
  3. 07 Feb 2018, 3 commits
  4. 27 Jan 2018, 1 commit
  5. 17 Jan 2018, 1 commit
    • arm64: kpti: Fix the interaction between ASID switching and software PAN · 6b88a32c
      Committed by Catalin Marinas
      With ARM64_SW_TTBR0_PAN enabled, the exception entry code checks the
      active ASID to decide whether user access was enabled (non-zero ASID)
      when the exception was taken. On return from exception, if user access
      was previously disabled, it re-instates TTBR0_EL1 from the per-thread
      saved value (updated in switch_mm() or efi_set_pgd()).
      
      Commit 7655abb9 ("arm64: mm: Move ASID from TTBR0 to TTBR1") makes the
      TTBR0_EL1 + ASID switching non-atomic. Subsequently, commit 27a921e7
      ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN") changes the
      __uaccess_ttbr0_disable() function and asm macro to first write the
      reserved TTBR0_EL1 followed by the ASID=0 update in TTBR1_EL1. If an
      exception occurs between these two, the exception return code will
      re-instate a valid TTBR0_EL1. A similar scenario can happen in
      cpu_switch_mm() between setting the reserved TTBR0_EL1 and the ASID
      update in cpu_do_switch_mm().
      
      This patch reverts the entry.S check for ASID == 0 to TTBR0_EL1 and
      disables the interrupts around the TTBR0_EL1 and ASID switching code in
      __uaccess_ttbr0_disable(). It also ensures that, when returning from the
      EFI runtime services, efi_set_pgd() doesn't leave a non-zero ASID in
      TTBR1_EL1 by using uaccess_ttbr0_{enable,disable}.
      
      The accesses to current_thread_info()->ttbr0 are updated to use
      READ_ONCE/WRITE_ONCE (a rough sketch of what these accessors provide
      follows this entry).
      
      As a safety measure, __uaccess_ttbr0_enable() always masks out any
      existing non-zero ASID in TTBR1_EL1 before writing in the new ASID.
      
      Fixes: 27a921e7 ("arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN")
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6b88a32c
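
      As a rough illustration of what the READ_ONCE/WRITE_ONCE change above
      buys: for a scalar field the accessors boil down to volatile accesses,
      so the compiler cannot tear, cache or silently re-read the value around
      the non-atomic switching sequence. The block below is a simplified
      user-space analogue, not the kernel's exact macros, and saved_ttbr0 is
      an invented stand-in for current_thread_info()->ttbr0.

        /* once_sketch.c - simplified READ_ONCE/WRITE_ONCE analogue (C11) */
        #include <stdint.h>
        #include <stdio.h>

        #define READ_ONCE_U64(x)     (*(const volatile uint64_t *)&(x))
        #define WRITE_ONCE_U64(x, v) (*(volatile uint64_t *)&(x) = (v))

        static uint64_t saved_ttbr0;  /* stand-in for the per-thread saved TTBR0 value */

        int main(void)
        {
                /* volatile store: the compiler may not elide or duplicate it */
                WRITE_ONCE_U64(saved_ttbr0, 0x1234000ULL);
                /* volatile load: memory is re-read, never a cached register copy */
                uint64_t v = READ_ONCE_U64(saved_ttbr0);
                printf("0x%llx\n", (unsigned long long)v);
                return 0;
        }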
  6. 16 Jan 2018, 2 commits
    • arm64: kernel: Prepare for a DISR user · 68ddbf09
      Committed by James Morse
      KVM would like to consume any pending SError (or RAS error) after guest
      exit. Today it has to unmask SError and use dsb+isb to synchronise the
      CPU. With the RAS extensions we can use ESB to synchronise any pending
      SError.
      
      Add the necessary macros to allow DISR to be read and converted to an
      ESR.
      
      We clear the DISR register when we enable the RAS cpufeature; at that
      point the kernel has not executed any ESB instructions, so any value we
      find in DISR must have belonged to firmware. Executing an ESB
      instruction is the only way to update DISR, so we can expect firmware
      to have handled any deferred SError. By the same logic we clear DISR in
      the idle path.
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      68ddbf09
    • arm64: sysreg: Move to use definitions for all the SCTLR bits · 7a00d68e
      Committed by James Morse
      __cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks,
      and el2_setup() duplicates some of this when setting the RES1 bits.
      
      Let's make this the same as KVM's hyp_init, which uses named bits.
      
      First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0}
      bits, and those we want to set or clear.
      
      Add build_bug checks to ensure all bits are either set or clear (a
      minimal sketch of such a check follows this entry). This means we
      don't need to preserve the endianness configuration generated
      elsewhere.
      
      Finally, move the head.S and proc.S users of these hard-coded masks
      over to the macro versions.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7a00d68e
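
      As a rough illustration of the build_bug idea mentioned above, the
      compile-time check below verifies that a "set" mask and a "clear" mask
      are disjoint and together cover every bit of a 32-bit register. The
      macro names and values are placeholders, not the kernel's actual
      SCTLR_EL1 definitions.

        /* sctlr_check.c - placeholder masks, illustrative only */
        #include <stdint.h>

        #define SCTLR_SET_BITS_SK    UINT32_C(0x0000FFFF)  /* bits we require to be 1 */
        #define SCTLR_CLEAR_BITS_SK  UINT32_C(0xFFFF0000)  /* bits we require to be 0 */

        /* Every bit must be claimed exactly once: editing one mask without
         * updating the other breaks the build. */
        _Static_assert((SCTLR_SET_BITS_SK ^ SCTLR_CLEAR_BITS_SK) == UINT32_C(0xFFFFFFFF),
                       "inconsistent SCTLR set/clear bits");

        int main(void) { return 0; }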
  7. 13 Jan 2018, 1 commit
  8. 09 Jan 2018, 1 commit
  9. 23 Dec 2017, 2 commits
  10. 11 Dec 2017, 3 commits
  11. 02 Nov 2017, 1 commit
  12. 24 Feb 2017, 1 commit
    • arm64: Avoid clobbering mm in erratum workaround on QDF2400 · ea6eac90
      Committed by Shanker Donthineni
      Commit 38fd94b0 ("arm64: Work around Falkor erratum 1003") tried to
      work around a hardware erratum, but actually caused a system crash of
      its own during switch_mm:
      
       cpu_do_switch_mm+0x20/0x40
       efi_virtmap_load+0x34/0x40
       virt_efi_get_next_variable+0x64/0xc8
       efivar_init+0x8c/0x348
       efisubsys_init+0xd4/0x270
       do_one_initcall+0x80/0x110
       kernel_init_freeable+0x19c/0x240
       kernel_init+0x10/0x100
       ret_from_fork+0x10/0x50
      
       Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      In cpu_do_switch_mm, x1 contains the mm_struct pointer, which needs to
      be preserved by the pre_ttbr0_update_workaround macro rather than passed
      as a temporary.
      
      This patch clobbers x2 and x3 instead, keeping the mm_struct intact
      after the workaround has run.
      
      Fixes: 38fd94b0 ("arm64: Work around Falkor erratum 1003")
      Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ea6eac90
  13. 10 Feb 2017, 1 commit
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Committed by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      38fd94b0
  14. 22 Nov 2016, 1 commit
  15. 12 Nov 2016, 1 commit
    • arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx · 623b476f
      Committed by Mark Rutland
      When returning from idle, we rely on the fact that thread_info lives at
      the end of the kernel stack, and restore this by masking the saved stack
      pointer. Subsequent patches will sever the relationship between the
      stack and thread_info, and to cater for this we must save/restore sp_el0
      explicitly, storing it in cpu_suspend_ctx.
      
      As cpu_suspend_ctx must be doubleword aligned, this leaves us with an
      extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1
      in the same way, which simplifies the code, avoiding pointer chasing on
      the restore path (as we no longer need to load thread_info::cpu followed
      by the relevant slot in __per_cpu_offset based on this).
      
      This patch stashes both registers in cpu_suspend_ctx; a toy layout
      sketch follows this entry.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      623b476f
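
      To make the alignment argument above concrete, here is a hedged,
      stand-alone sketch, not the kernel's actual cpu_suspend_ctx layout:
      with a 16-byte aligned context, an odd number of trailing 8-byte slots
      leaves a padding slot, so stashing one more register (tpidr_el1
      alongside sp_el0) costs no extra space. NR_CTX_REGS and the struct
      names here are made up for illustration.

        /* ctx_layout.c - illustrative only; NR_CTX_REGS is a made-up count */
        #include <stdint.h>

        #define NR_CTX_REGS 12                  /* placeholder register count */

        struct ctx_without_tpidr {
                uint64_t ctx_regs[NR_CTX_REGS]; /* callee-saved registers */
                uint64_t sp_el0;                /* explicit SP save       */
        } __attribute__((aligned(16)));

        struct ctx_with_tpidr {
                uint64_t ctx_regs[NR_CTX_REGS];
                uint64_t sp_el0;
                uint64_t tpidr_el1;             /* reuses the alignment padding slot */
        } __attribute__((aligned(16)));

        /* The extra register fits in padding: the doubleword-aligned size
         * of the context does not grow. */
        _Static_assert(sizeof(struct ctx_with_tpidr) == sizeof(struct ctx_without_tpidr),
                       "tpidr_el1 should fit in the existing alignment slot");

        int main(void) { return 0; }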
  16. 12 Sep 2016, 1 commit
    • arm64: use alternative auto-nop · 6ba3b554
      Committed by Mark Rutland
      Make use of the new alternative_if and alternative_else_nop_endif and
      get rid of our homebrew NOP sleds, making the code simpler to read.
      
      Note that for cpu_do_switch_mm the ret has been moved out of the
      alternative sequence, and in the default case there will be three
      additional NOPs executed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6ba3b554
  17. 03 Sep 2016, 1 commit
  18. 26 Aug 2016, 1 commit
    • arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap · b6113038
      Committed by James Morse
      Resume from hibernate needs to clean any text executed by the kernel with
      the MMU off to the PoC. Collect these functions together into the
      .idmap.text section as all this code is tightly coupled and also needs
      the same cleaning after resume.
      
      Data is more complicated: secondary_holding_pen_release is written with
      the MMU on, cleaned and invalidated, then read with the MMU off. In
      contrast, __boot_cpu_mode is written with the MMU off and the
      corresponding cache line is invalidated, so when we read it with the
      MMU on we don't get stale data. These cache maintenance operations
      conflict with each other if the values are within a Cache Writeback
      Granule (CWG) of each other.
      Collect the data into two sections, .mmuoff.data.read and
      .mmuoff.data.write; the linker script ensures the .mmuoff.data.write
      section is aligned to the architectural maximum CWG of 2KB (a hedged
      placement sketch follows this entry).
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b6113038
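
      As a rough illustration of the placement described above (not the
      kernel's actual macros or linker script), the GCC attributes below put
      variables into named read/write sections and force 2KB alignment on
      the write side, so that write-side and read-side data cannot share a
      Cache Writeback Granule. The variable names are placeholders.

        /* mmuoff_sketch.c - illustrative use of section/aligned attributes */
        #include <stdint.h>

        /* Written with the MMU on, later read with the MMU off. */
        static volatile uint64_t pen_release_sketch
                __attribute__((section(".mmuoff.data.write"), aligned(2048))) = (uint64_t)-1;

        /* Written with the MMU off, later read with the MMU on. */
        static volatile uint64_t boot_mode_sketch
                __attribute__((section(".mmuoff.data.read"))) = 0;

        int main(void)
        {
                /* touch both so the linker keeps the sections around */
                return (int)(pen_release_sketch + boot_mode_sketch) & 0;
        }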
  19. 19 Jul 2016, 1 commit
    • arm64: debug: unmask PSTATE.D earlier · 2ce39ad1
      Committed by Will Deacon
      Clearing PSTATE.D is one of the requirements for generating a debug
      exception. The arm64 booting protocol requires that PSTATE.D is set,
      since many of the debug registers (for example, the hw_breakpoint
      registers) are UNKNOWN out of reset and could potentially generate
      spurious, fatal debug exceptions in early boot code if PSTATE.D was
      clear. Once the debug registers have been safely initialised, PSTATE.D
      is cleared; however, this is currently broken for two reasons:
      
      (1) The boot CPU clears PSTATE.D in a postcore_initcall and secondary
          CPUs clear PSTATE.D in secondary_start_kernel. Since the initcall
          runs after SMP (and the scheduler) have been initialised, there is
          no guarantee that it is actually running on the boot CPU. In this
          case, the boot CPU is left with PSTATE.D set and is not capable of
          generating debug exceptions.
      
      (2) In a preemptible kernel, we may explicitly schedule on the IRQ
          return path to EL1. If an IRQ occurs with PSTATE.D set in the idle
          thread, then we may schedule the kthread_init thread, run the
          postcore_initcall to clear PSTATE.D and then context switch back
          to the idle thread before returning from the IRQ. The exception
          return path will then restore PSTATE.D from the stack, and set it
          again.
      
      This patch fixes the problem by moving the clearing of PSTATE.D earlier
      to proc.S. This has the desirable effect of clearing it in one place for
      all CPUs, long before we have to worry about the scheduler or any
      exception handling. We ensure that the previous reset of MDSCR_EL1 has
      completed before unmasking the exception, so that any spurious
      exceptions resulting from UNKNOWN debug registers are not generated.
      
      Without this patch applied, the kprobes selftests have been seen to fail
      under KVM, where we end up attempting to step the OOL instruction buffer
      with PSTATE.D set and therefore fail to complete the step.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2ce39ad1
  20. 28 Apr 2016, 2 commits
  21. 26 Feb 2016, 1 commit
  22. 16 Feb 2016, 1 commit
    • arm64: mm: add code to safely replace TTBR1_EL1 · 50e1881d
      Committed by Mark Rutland
      If page tables are modified without suitable TLB maintenance, the ARM
      architecture permits multiple TLB entries to be allocated for the same
      VA. When this occurs, it is permitted that TLB conflict aborts are
      raised in response to synchronous data/instruction accesses, and/or an
      amalgamation of the TLB entries may be used as a result of a TLB lookup.
      
      The presence of conflicting TLB entries may result in a variety of
      behaviours detrimental to the system (e.g. erroneous physical addresses
      may be used by I-cache fetches and/or page table walks). Some of these
      cases may result in unexpected changes of hardware state, and/or result
      in the (asynchronous) delivery of SError.
      
      To avoid these issues, we must avoid situations where conflicting
      entries may be allocated into TLBs. For user and module mappings we can
      follow a strict break-before-make approach, but this cannot work for
      modifications to the swapper page tables that cover the kernel text and
      data.
      
      Instead, this patch adds code which is intended to be executed from the
      idmap, which can safely unmap the swapper page tables as it only
      requires the idmap to be active. This enables us to uninstall the active
      TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry
      without potentially unmapping code or data required for the sequence.
      This avoids the risk of conflict, but requires that updates are staged
      in a copy of the swapper page tables prior to being installed (a
      simplified ordering sketch follows this entry).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      50e1881d
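
      The toy program below is a hedged, user-space analogue of the ordering
      described above, not the kernel's implementation (which is assembly run
      from the idmap). The helper names, the pgtable_sketch type and the
      printouts are all invented for illustration; the point is only the
      uninstall / invalidate / install sequence, which never lets old and new
      translations be live at the same time.

        /* ttbr1_swap_sketch.c - illustrative break-before-make ordering */
        #include <stdio.h>

        struct pgtable_sketch { const char *name; };

        static struct pgtable_sketch reserved_tables = { "reserved (empty) tables" };
        static struct pgtable_sketch *active_ttbr1;        /* stand-in for TTBR1_EL1 */

        static void install(struct pgtable_sketch *t)
        {
                active_ttbr1 = t;                          /* a register write + barrier
                                                              in the real sequence     */
                printf("install: %s\n", t->name);
        }

        static void invalidate_local_tlb(void)
        {
                printf("invalidate local TLB\n");          /* TLB invalidate + barriers */
        }

        static void replace_ttbr1(struct pgtable_sketch *new_tables)
        {
                install(&reserved_tables);   /* 1. uninstall: switch to a safe, empty table */
                invalidate_local_tlb();      /* 2. no stale entries for the old tables remain */
                install(new_tables);         /* 3. install the staged copy of swapper */
        }

        int main(void)
        {
                struct pgtable_sketch staged_swapper = { "staged copy of swapper_pg_dir" };
                replace_ttbr1(&staged_swapper);
                printf("active: %s\n", active_ttbr1->name);
                return 0;
        }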
  23. 25 Jan 2016, 1 commit
  24. 21 Dec 2015, 1 commit
    • arm64: kernel: enforce pmuserenr_el0 initialization and restore · 60792ad3
      Committed by Lorenzo Pieralisi
      The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
      Current kernel code resets that register value iff the core pmu device is
      correctly probed in the kernel. On platforms with missing DT pmu nodes (or
      disabled perf events in the kernel), the pmu is not probed and therefore
      the pmuserenr_el0 register is not reset in the kernel, which means that
      its value retains the architecturally UNKNOWN reset value (the system
      may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter
      access is available at EL0, which must be disallowed).
      
      This patch adds code that resets pmuserenr_el0 on cold boot and restores
      it on core resume from shutdown, so that the pmuserenr_el0 setup is
      always enforced in the kernel.
      
      Cc: <stable@vger.kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      60792ad3
  25. 12 Dec 2015, 1 commit
    • arm64: mm: place __cpu_setup in .text · f00083ca
      Committed by Mark Rutland
      We currently place __cpu_setup in .text.init, which ends up being part of .text.
      The .text.init section was a legacy section name which has been unused
      elsewhere for a long time.
      
      The ".text.init" name is misleading if read as a synonym for
      ".init.text". Any CPU may execute __cpu_setup before turning the MMU on,
      so it should simply live in .text.
      
      Remove the pointless section assignment. This will leave __cpu_setup in
      the .text section.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      f00083ca
  26. 20 Oct 2015, 1 commit
  27. 07 Oct 2015, 2 commits
  28. 20 Aug 2015, 1 commit
  29. 08 Aug 2015, 1 commit
  30. 05 Aug 2015, 1 commit
    • arm64: mm: ensure patched kernel text is fetched from PoU · 8ec41987
      Committed by Will Deacon
      The arm64 booting document requires that the bootloader has cleaned the
      kernel image to the PoC. However, when a CPU re-enters the kernel due to
      either a CPU hotplug "on" event or resuming from a low-power state (e.g.
      cpuidle), the kernel text may in fact be dirty at the PoU due to things
      like alternative patching or even module loading.
      
      Thanks to I-cache speculation with the MMU off, stale instructions could
      be fetched prior to enabling the MMU, potentially leading to crashes
      when executing regions of code that have been modified at runtime.
      
      This patch addresses the issue by ensuring that the local I-cache is
      invalidated immediately after a CPU has enabled its MMU but before
      jumping out of the identity mapping. Any stale instructions fetched from
      the PoC will then be discarded and refetched correctly from the PoU.
      Patching kernel text executed prior to the MMU being enabled is
      prohibited, so the early entry code will always be clean.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      8ec41987
  31. 27 Jul 2015, 2 commits
    • arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Committed by Will Deacon
      Nobody seems to be producing !SMP systems anymore, so this is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4b3dc967
    • arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Committed by Catalin Marinas
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with the
      DBM (Dirty Bit Management) bit set in the corresponding pte
      automatically clears the read-only bit (AP[2]). This DBM bit maps onto
      the Linux PTE_WRITE bit, and to check whether a writable (DBM set) page
      is dirty, the kernel tests the PTE_RDONLY bit. In order to allow
      read-only and dirty pages, the kernel needs to preserve the software
      dirty bit. The hardware dirty status is transferred to the software
      dirty bit in ptep_set_wrprotect() (using a load/store-exclusive loop)
      and pte_modify(); a simplified analogue of such a loop follows this
      entry.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      2f4b829c
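
      The program below is a hedged, user-space analogue of the exclusive
      (ldxr/stxr-style) update mentioned above, using C11 atomics instead of
      arm64 exclusives. The bit values and the fake_pte name are invented for
      illustration; only the pattern matters: re-read the entry on contention
      so that a concurrent "hardware" clearing of the read-only bit is folded
      into the software dirty bit rather than lost.

        /* wrprotect_sketch.c - illustrative dirty-bit preserving write-protect */
        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        #define PTE_RDONLY_SK  (1u << 0)   /* placeholder for AP[2]          */
        #define PTE_DBM_SK     (1u << 1)   /* placeholder for PTE_WRITE/DBM  */
        #define PTE_DIRTY_SK   (1u << 2)   /* placeholder software dirty bit */

        /* writable (DBM set) and clean (read-only still set) */
        static _Atomic uint32_t fake_pte = PTE_DBM_SK | PTE_RDONLY_SK;

        static void wrprotect_sketch(_Atomic uint32_t *pte)
        {
                uint32_t old = atomic_load(pte), new;

                do {
                        new = old;
                        /* DBM set and read-only clear == dirtied by "hardware":
                         * transfer that state into the software dirty bit. */
                        if ((new & PTE_DBM_SK) && !(new & PTE_RDONLY_SK))
                                new |= PTE_DIRTY_SK;
                        new |= PTE_RDONLY_SK;            /* actually write-protect */
                } while (!atomic_compare_exchange_weak(pte, &old, new));
        }

        int main(void)
        {
                /* Simulate a hardware DBM update racing ahead of us. */
                atomic_fetch_and(&fake_pte, ~PTE_RDONLY_SK);

                wrprotect_sketch(&fake_pte);
                printf("pte = 0x%x (dirty preserved: %s)\n",
                       (unsigned)atomic_load(&fake_pte),
                       (atomic_load(&fake_pte) & PTE_DIRTY_SK) ? "yes" : "no");
                return 0;
        }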