1. 09 Jan 2018, 8 commits
  2. 05 Jan 2018, 2 commits
    • arm64: v8.4: Support for new floating point multiplication instructions · 3b3b6810
      Committed by Dongjiu Geng
      The ARMv8.4 extensions add new Neon instructions that multiply each
      FP16 element of one vector with the corresponding FP16 element of a
      second vector, then add the product to, or subtract it from, the
      corresponding FP32 element of a third vector without an intermediate
      rounding step.
      
      This patch detects the feature and lets userspace know about it via a
      HWCAP bit and MRS emulation.
      
      Cc: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3b3b6810
    • arm64: asid: Do not replace active_asids if already 0 · a8ffaaa0
      Committed by Catalin Marinas
      Under some uncommon timing conditions, a generation check and
      xchg(active_asids, A1) in check_and_switch_context() on P1 can race with
      an ASID roll-over on P2. If P2 has not seen the update to
      active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends
      up waiting on the spinlock since the xchg() returned 0 while P2 can go
      through a second ASID roll-over with (T2,A1,G2) active on P2. This
      roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and
      active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent
      scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get
      their generation bumped to G3:
      
      P1					P2
      --                                      --
      TTBR0.BADDR = T0
      TTBR0.ASID = A0
      asid_generation = G1
      check_and_switch_context(T1,A1,G1)
        generation match
      					check_and_switch_context(T2,A0,G0)
       				          new_context()
      					    ASID roll-over
      					    asid_generation = G2
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 0
      					      reserved_asids[P1] = A0,G0
        xchg(active_asids, A1)
          active_asids[P1] = A1,G1
          xchg returns 0
        spin_lock_irqsave()
      					    allocated ASID (T2,A1,G2)
      					    asid_map[A1] = 1
      					  active_asids[P2] = A1,G2
      					...
      					check_and_switch_context(T3,A0,G0)
      					  new_context()
      					    ASID roll-over
      					    asid_generation = G3
      					    flush_context()
      					      active_asids[P1] = 0
      					      asid_map[A1] = 1
      					      reserved_asids[P1] = A1,G1
      					      reserved_asids[P2] = A1,G2
      					    allocated ASID (T3,A2,G3)
      					    asid_map[A2] = 1
      					  active_asids[P2] = A2,G3
        new_context()
          check_update_reserved_asid(A1,G1)
            matches reserved_asid[P1]
            reserved_asid[P1] = A1,G3
        updated T1 ASID to (T1,A1,G3)
      					check_and_switch_context(T2,A1,G2)
      					  new_context()
      					    check_update_reserved_asid(A1,G2)
      					      matches reserved_asids[P2]
      					      reserved_asids[P2] = A1,G3
      					  updated T2 ASID to (T2,A1,G3)
      
      At this point, two tasks, T1 and T2, are both using ASID A1 with the
      latest generation G3. Either of them may be scheduled on the other
      CPU, leading to two different tasks with the same ASID on the same
      CPU.
      
      This patch changes the xchg() to a cmpxchg() so that active_asids is
      only updated if it is non-zero, avoiding a race with an ASID roll-over
      on a different CPU.
      
      The ASID allocation algorithm has been formally verified using the TLA+
      model checker (see
      https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla
      for the spec).
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a8ffaaa0
  3. 02 Jan 2018, 3 commits
  4. 23 Dec 2017, 9 commits
  5. 12 Dec 2017, 1 commit
    • Merge branch 'kpti' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux · 6aef0fdd
      Committed by Catalin Marinas
      Support for unmapping the kernel when running in userspace (aka
      "KAISER").
      
      * 'kpti' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
        arm64: kaslr: Put kernel vectors address in separate data page
        arm64: mm: Introduce TTBR_ASID_MASK for getting at the ASID in the TTBR
        perf: arm_spe: Fail device probe when arm64_kernel_unmapped_at_el0()
        arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
        arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
        arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
        arm64: erratum: Work around Falkor erratum #E1003 in trampoline code
        arm64: entry: Hook up entry trampoline to exception vectors
        arm64: entry: Explicitly pass exception level to kernel_ventry macro
        arm64: mm: Map entry trampoline into trampoline and kernel page tables
        arm64: entry: Add exception trampoline page for exceptions from EL0
        arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
        arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
        arm64: mm: Allocate ASIDs in pairs
        arm64: mm: Fix and re-enable ARM64_SW_TTBR0_PAN
        arm64: mm: Rename post_ttbr0_update_workaround
        arm64: mm: Remove pre_ttbr0_update_workaround for Falkor erratum #E1003
        arm64: mm: Move ASID from TTBR0 to TTBR1
        arm64: mm: Temporarily disable ARM64_SW_TTBR0_PAN
        arm64: mm: Use non-global mappings for kernel space
      6aef0fdd
  6. 11 Dec 2017, 17 commits