1. 09 January 2018 (2 commits)
  2. 05 January 2018 (1 commit)
    • arm64: asid: Do not replace active_asids if already 0 · a8ffaaa0
      Committed by Catalin Marinas
      Under some uncommon timing conditions, a generation check and
      xchg(active_asids, A1) in check_and_switch_context() on P1 can race with
      an ASID roll-over on P2. If P2 has not seen the update to
      active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends
      up waiting on the spinlock since the xchg() returned 0 while P2 can go
      through a second ASID roll-over with (T2,A1,G2) active on P2. This
      roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and
      active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent
      scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get
      their generation bumped to G3:
      
      P1					P2
      --                                      --
      TTBR0.BADDR = T0
      TTBR0.ASID = A0
      asid_generation = G1
      check_and_switch_context(T1,A1,G1)
        generation match
      					check_and_switch_context(T2,A0,G0)
       				          new_context()
      					    ASID roll-over
      					    asid_generation = G2
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 0
      					      reserved_asids[P1] = A0,G0
        xchg(active_asids, A1)
          active_asids[P1] = A1,G1
          xchg returns 0
        spin_lock_irqsave()
      					    allocated ASID (T2,A1,G2)
      					    asid_map[A1] = 1
      					  active_asids[P2] = A1,G2
      					...
      					check_and_switch_context(T3,A0,G0)
      					  new_context()
      					    ASID roll-over
      					    asid_generation = G3
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 1
      					      reserved_asids[P1] = A1,G1
      					      reserved_asids[P2] = A1,G2
      					    allocated ASID (T3,A2,G3)
      					    asid_map[A2] = 1
      					  active_asids[P2] = A2,G3
        new_context()
          check_update_reserved_asid(A1,G1)
            matches reserved_asid[P1]
            reserved_asid[P1] = A1,G3
        updated T1 ASID to (T1,A1,G3)
      					check_and_switch_context(T2,A1,G2)
      					  new_context()
      					    check_update_reserved_asid(A1,G2)
      					      matches reserved_asids[P2]
      					      reserved_asids[P2] = A1,G3
      					  updated T2 ASID to (T2,A1,G3)
      
      At this point, we have two tasks, T1 and T2, both using ASID A1 with the
      latest generation G3. Either of them may be scheduled on the other CPU,
      resulting in two different tasks with the same ASID active on the same
      CPU.
      
      This patch changes the xchg to a cmpxchg so that active_asids is only
      updated if it is non-zero, avoiding the race with an ASID roll-over on a
      different CPU.
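
      Schematically, the fast path now only publishes the ASID if the per-CPU
      active_asids slot is still non-zero; a simplified sketch (the helper name,
      parameters and types below are illustrative, not the actual code in
      arch/arm64/mm/context.c):

        #include <linux/atomic.h>
        #include <linux/types.h>

        /*
         * Sketch of the fast-path claim. A zero slot means a concurrent
         * roll-over has already run flush_context(), so the caller must fall
         * back to the spinlock-protected slow path instead of overwriting it.
         */
        static bool fast_path_claim(atomic64_t *active_slot, u64 asid,
                                    atomic64_t *generation, unsigned int asid_bits)
        {
                u64 old = atomic64_read(active_slot);

                if (!old)                       /* cleared by a roll-over */
                        return false;
                if ((asid ^ atomic64_read(generation)) >> asid_bits)
                        return false;           /* stale generation */

                /* cmpxchg instead of xchg: never overwrite a zeroed slot */
                return atomic64_cmpxchg_relaxed(active_slot, old, asid) == old;
        }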
      
      The ASID allocation algorithm has been formally verified using the TLA+
      model checker (see
      https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla
      for the spec).
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 11 December 2017 (2 commits)
  4. 01 December 2017 (1 commit)
  5. 29 November 2017 (1 commit)
    • arm64: mm: cleanup stale AIVIVT references · f81a3487
      Committed by Mark Rutland
      Since commit:
      
        155433cb ("arm64: cache: Remove support for ASID-tagged VIVT I-caches")
      
      ... the kernel no longer cares about AIVIVT I-caches, as these were
      removed from the architecture.
      
      This patch removes the stale references to such I-caches.
      
      The comment in flush_context() is also updated to clarify when and where
      the TLB invalidation occurs.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 21 March 2017 (1 commit)
  7. 10 February 2017 (1 commit)
    • arm64: Work around Falkor erratum 1003 · 38fd94b0
      Committed by Christopher Covington
      The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries
      using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum
      is triggered, page table entries using the new translation table base
      address (BADDR) will be allocated into the TLB using the old ASID. All
      circumstances leading to the incorrect ASID being cached in the TLB arise
      when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory
      operation is in the process of performing a translation using the specific
      TTBRx_EL1 being written, and the memory operation uses a translation table
      descriptor designated as non-global. EL2 and EL3 code changing the EL1&0
      ASID is not subject to this erratum because hardware is prohibited from
      performing translations from an out-of-context translation regime.
      
      Consider the following pseudo code.
      
        write new BADDR and ASID values to TTBRx_EL1
      
      Replacing the above sequence with the one below will ensure that no TLB
      entries with an incorrect ASID are used by software.
      
        write reserved value to TTBRx_EL1[ASID]
        ISB
        write new value to TTBRx_EL1[BADDR]
        ISB
        write new value to TTBRx_EL1[ASID]
        ISB
      
      When the above sequence is used, page table entries using the new BADDR
      value may still be incorrectly allocated into the TLB using the reserved
      ASID. Yet this will not reduce functionality, since TLB entries incorrectly
      tagged with the reserved ASID will never be hit by a later instruction.
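
      In C-like form, the safe sequence is roughly the following; a sketch
      only (the real workaround is implemented in assembly, and the helper
      name, the ASID field position and the reserved ASID value of 0 are
      assumptions here):

        #include <linux/types.h>
        #include <asm/barrier.h>
        #include <asm/sysreg.h>

        #define SKETCH_ASID_SHIFT	48
        #define SKETCH_ASID_MASK	(0xffffULL << SKETCH_ASID_SHIFT)

        static inline void sketch_safe_ttbr0_update(u64 new_baddr, u64 new_asid)
        {
                /* 1. Switch to the reserved ASID first */
                u64 ttbr = read_sysreg(ttbr0_el1) & ~SKETCH_ASID_MASK;
                write_sysreg(ttbr, ttbr0_el1);
                isb();

                /* 2. Install the new translation table base address */
                ttbr = new_baddr & ~SKETCH_ASID_MASK;
                write_sysreg(ttbr, ttbr0_el1);
                isb();

                /* 3. Finally install the new ASID */
                ttbr |= new_asid << SKETCH_ASID_SHIFT;
                write_sysreg(ttbr, ttbr0_el1);
                isb();
        }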
      
      Based on work by Shanker Donthineni <shankerd@codeaurora.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christopher Covington <cov@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 22 November 2016 (1 commit)
    • arm64: Disable TTBR0_EL1 during normal kernel execution · 39bc88e5
      Committed by Catalin Marinas
      When the TTBR0 PAN feature is enabled, the kernel entry points need to
      disable access to TTBR0_EL1. The PAN status of the interrupted context
      is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
      Restoring access to TTBR0_EL1 is done on exception return if returning
      to user or returning to a context where PAN was disabled.
      
      Context switching via switch_mm() must defer the update of TTBR0_EL1
      until a return to user or an explicit uaccess_enable() call.
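
      Schematically, "disabling" TTBR0_EL1 means parking it on a zero-page
      backed, all-invalid table, while uaccess_enable() restores the user page
      table saved at context switch; a simplified sketch (the helper names and
      the explicit saved_user_ttbr0 parameter are illustrative):

        #include <linux/types.h>
        #include <asm/barrier.h>
        #include <asm/memory.h>
        #include <asm/pgtable.h>
        #include <asm/sysreg.h>

        static inline void sketch_uaccess_ttbr0_disable(void)
        {
                /* Point TTBR0_EL1 at a table with no valid entries */
                write_sysreg(virt_to_phys(empty_zero_page), ttbr0_el1);
                isb();
        }

        static inline void sketch_uaccess_ttbr0_enable(u64 saved_user_ttbr0)
        {
                /* Restore the user page table recorded by switch_mm() */
                write_sysreg(saved_user_ttbr0, ttbr0_el1);
                isb();
        }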
      
      Special care needs to be taken for two cases where TTBR0_EL1 is set
      outside the normal kernel context switch operation: EFI run-time
      services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
      Code has been added to avoid deferred TTBR0_EL1 switching as in
      switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
      special TTBR0_EL1.
      
      User cache maintenance (user_cache_maint_handler and
      __flush_cache_user_range) needs TTBR0_EL1 reinstated since the
      operations are performed on user virtual addresses.
      
      This patch also removes a stale comment on the switch_mm() function.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 22 June 2016 (1 commit)
    • arm64: update ASID limit · f7e0efc9
      Committed by Jean-Philippe Brucker
      During a rollover, we mark the active ASID on each CPU as reserved, before
      allocating a new ID for the task that caused the rollover. This means that
      with N CPUs, we can only guarantee the new task to obtain a valid ASID if
      we have at least N+1 ASIDs. Update this limit in the initcall check.
      
      Note that this restriction was introduced by commit 8e648066 on the
      arch/arm side, which disallows re-using the previously active ASID on the
      local CPU, as it would introduce a TLB race.
      
      In addition, only NUM_USER_ASIDS-1 ASIDs are actually available, since
      ASID 0 is reserved. Add this restriction as well.
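
      The resulting check looks roughly like this; a sketch only (asid_bits is
      normally probed from ID_AA64MMFR0_EL1 rather than hard-coded, and the
      initcall name is illustrative):

        #include <linux/bug.h>
        #include <linux/cpumask.h>
        #include <linux/init.h>

        static unsigned int asid_bits = 16;	/* 8 or 16 on arm64 */
        #define NUM_USER_ASIDS	(1UL << asid_bits)

        static int __init sketch_asid_limit_check(void)
        {
                /*
                 * A rollover reserves one ASID per CPU and ASID 0 is never
                 * handed out, so strictly more usable ASIDs than CPUs are
                 * needed to guarantee a successful allocation.
                 */
                WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
                return 0;
        }
        early_initcall(sketch_asid_limit_check);
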
      Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 16 April 2016 (1 commit)
    • arm64: Add cpu_panic_kernel helper · 17eebd1a
      Committed by Suzuki K Poulose
      While activating a secondary CPU, we may detect serious configuration
      problems and need to crash the kernel. We currently do this for the CPU
      ASID-bits check, and we will also need it to handle mismatched exception
      levels on CPUs with VHE. Add a helper so the same logic can be reused.
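
      The helper boils down to recording the failure and parking the CPU; a
      rough sketch (update_cpu_boot_status() and cpu_park_loop() are the
      existing primitives in <asm/smp.h>, and the status value shown is an
      assumption):

        static inline void cpu_panic_kernel(void)
        {
                update_cpu_boot_status(CPU_PANIC_KERNEL);   /* record why we gave up */
                cpu_park_loop();                            /* never returns */
        }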
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  11. 04 March 2016 (1 commit)
    • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
      Committed by Mark Rutland
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown by the
      toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree, and uses read_cpuid without a SYS_
      prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
      are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
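
      In other words, the SYS_ paste now happens inside the macro, so callers
      write read_cpuid(MIDR_EL1) rather than read_cpuid(SYS_MIDR_EL1); a
      sketch of the idea (the macro name is illustrative and the real arm64
      definition may differ in detail):

        #include <linux/stringify.h>
        #include <linux/types.h>
        #include <asm/sysreg.h>

        /* mrs_s lets the assembler emit reads of registers it may not know */
        #define sketch_read_cpuid(reg) ({                                      \
                u64 __val;                                                     \
                asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val));     \
                __val;                                                         \
        })
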
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  12. 25 February 2016 (3 commits)
  13. 18 February 2016 (1 commit)
  14. 26 November 2015 (1 commit)
    • arm64: mm: keep reserved ASIDs in sync with mm after multiple rollovers · 0ebea808
      Committed by Will Deacon
      Under some unusual context-switching patterns, it is possible to end up
      with multiple threads from the same mm running concurrently with
      different ASIDs:
      
      1. CPU x schedules task t with mm p containing ASID a and generation g
         This task doesn't block and the CPU doesn't context switch.
         So:
           * per_cpu(active_asid, x) = {g,a}
           * p->context.id = {g,a}
      
      2. Some other CPU generates an ASID rollover. The global generation is
         now (g + 1). CPU x is still running t, with no context switch and
         so per_cpu(reserved_asid, x) = {g,a}
      
      3. CPU y schedules task t', which shares mm p with t. The generation
         mismatches, so we take the slowpath and hit the reserved ASID from
         CPU x. p is then updated so that p->context.id = {g + 1,a}
      
      4. CPU y schedules some other task u, which has an mm != p.
      
      5. Some other CPU generates *another* ASID rollover. The global
         generation is now (g + 2). CPU x is still running t, with no context
         switch and so per_cpu(reserved_asid, x) = {g,a}.
      
      6. CPU y once again schedules task t', but now *fails* to hit the
         reserved ASID from CPU x because of the generation mismatch. This
         results in a new ASID being allocated, despite the fact that t is
         still running on CPU x with the same mm.
      
      Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
      between the two threads.
      
      This patch fixes the problem by updating all of the matching reserved
      ASIDs when we hit a reserved ASID on the slow path (i.e. in step 3
      above). This keeps the reserved ASIDs in sync with the mm and avoids the
      problem.
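
      Schematically, the slow-path helper now walks every CPU's reserved slot
      instead of stopping at the first match; a simplified sketch (the
      function and variable names mirror the allocator's
      check_update_reserved_asid() and reserved_asids, but the snippet is
      illustrative):

        #include <linux/cpumask.h>
        #include <linux/percpu.h>
        #include <linux/types.h>

        static DEFINE_PER_CPU(u64, reserved_asids);

        static bool sketch_check_update_reserved_asid(u64 asid, u64 newasid)
        {
                int cpu;
                bool hit = false;

                for_each_possible_cpu(cpu) {
                        if (per_cpu(reserved_asids, cpu) == asid) {
                                hit = true;
                                /*
                                 * Keep scanning: every stale copy must move to
                                 * the new generation, or a later rollover could
                                 * miss it (step 6 above).
                                 */
                                per_cpu(reserved_asids, cpu) = newasid;
                        }
                }

                return hit;
        }
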
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  15. 07 October 2015 (4 commits)
  16. 27 July 2015 (1 commit)
  17. 12 June 2015 (1 commit)
  18. 17 September 2012 (1 commit)