1. 18 Mar 2020, 7 commits
  2. 17 Jan 2020, 1 commit
  3. 27 Dec 2019, 1 commit
  4. 21 Dec 2019, 1 commit
  5. 05 Dec 2019, 1 commit
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 64694b27
      Will Deacon committed
      [ Upstream commit 7faa313f05cad184e8b17750f0cbe5216ac6debb ]
      
      Commit 396244692232 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value in favour of inspecting the thread
      flags separately in order to determine whether a reschedule is needed.

      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
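      For reference, the layout that commit 396244692232 gave the field (a
      condensed sketch of the mainline struct thread_info member) shows why a
      32-bit load at the field's base address picks up the flag rather than the
      count on big-endian:

          union {
                  u64     preempt_count;  /* 0 => preemptible, <0 => bug */
                  struct {
          #ifdef CONFIG_CPU_BIG_ENDIAN
                          u32     need_resched;   /* at offset 0 on big-endian */
                          u32     count;
          #else
                          u32     count;          /* at offset 0 on little-endian */
                          u32     need_resched;
          #endif
                  } preempt;
          };

      Because the need_resched flag is stored inverted (zero means a reschedule
      is due), the whole 64-bit value is zero exactly when the task is
      preemptible and needs rescheduling, which is what lets the reworked
      assembly test it with a single 64-bit load.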
  6. 01 Dec 2019, 1 commit
  7. 13 Nov 2019, 1 commit
  8. 06 Nov 2019, 2 commits
  9. 12 Oct 2019, 5 commits
  10. 08 Oct 2019, 1 commit
  11. 05 Oct 2019, 3 commits
    • arm64: tlb: Ensure we execute an ISB following walk cache invalidation · 8cfe3b8a
      Will Deacon committed
      commit 51696d346c49c6cf4f29e9b20d6e15832a2e3408 upstream.
      
      05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
      added a new TLB invalidation helper which is used when freeing
      intermediate levels of page table used for kernel mappings, but is
      missing the required ISB instruction after completion of the TLBI
      instruction.
      
      Add the missing barrier.
      
      Cc: <stable@vger.kernel.org>
      Fixes: 05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
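      A sketch of the helper with the barrier added, following the upstream
      patch (__TLBI_VADDR and __tlbi are the mainline macros of the time):

          static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
          {
                  unsigned long addr = __TLBI_VADDR(kaddr, 0);

                  dsb(ishst);             /* order prior page-table updates */
                  __tlbi(vaae1is, addr);  /* invalidate walk-cache entries */
                  dsb(ish);               /* wait for the invalidation */
                  isb();                  /* the missing barrier */
          }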
    • Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}" · fc7d6bfd
      Will Deacon committed
      commit d0b7a302d58abe24ed0f32a0672dd4c356bb73db upstream.
      
      This reverts commit 24fe1b0e.
      
      Commit 24fe1b0e ("arm64: Remove unnecessary ISBs from
      set_{pte,pmd,pud}") removed ISB instructions immediately following updates
      to the page table, on the grounds that they are not required by the
      architecture and a DSB alone is sufficient to ensure that subsequent data
      accesses use the new translation:
      
        DDI0487E_a, B2-128:
      
        | ... no instruction that appears in program order after the DSB
        | instruction can alter any state of the system or perform any part of
        | its functionality until the DSB completes other than:
        |
        | * Being fetched from memory and decoded
        | * Reading the general-purpose, SIMD and floating-point,
        |   Special-purpose, or System registers that are directly or indirectly
        |   read without causing side-effects.
      
      However, the same document also states the following:
      
        DDI0487E_a, B2-125:
      
        | DMB and DSB instructions affect reads and writes to the memory system
        | generated by Load/Store instructions and data or unified cache
        | maintenance instructions being executed by the PE. Instruction fetches
        | or accesses caused by a hardware translation table access are not
        | explicit accesses.
      
      which appears to claim that the DSB alone is insufficient.  Unfortunately,
      some CPU designers have followed the second clause above, whereas in Linux
      we've been relying on the first. This means that our mapping sequence:
      
      	MOV	X0, <valid pte>
      	STR	X0, [Xptep]	// Store new PTE to page table
      	DSB	ISHST
      	LDR	X1, [X2]	// Translates using the new PTE
      
      can actually raise a translation fault on the load instruction because the
      translation can be performed speculatively before the page table update and
      then marked as "faulting" by the CPU. For user PTEs, this is ok because we
      can handle the spurious fault, but for kernel PTEs and intermediate table
      entries this results in a panic().
      
      Revert the offending commit to reintroduce the missing barriers.
      
      Cc: <stable@vger.kernel.org>
      Fixes: 24fe1b0e ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
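      With the revert applied, set_pte() again ends with an ISB for kernel
      mappings; a condensed sketch of the restored shape:

          static inline void set_pte(pte_t *ptep, pte_t pte)
          {
                  WRITE_ONCE(*ptep, pte);

                  /*
                   * Only for valid kernel mappings: user PTEs can take and
                   * recover from the spurious fault described above.
                   */
                  if (pte_valid_not_user(pte)) {
                          dsb(ishst);     /* publish the PTE to the table walker */
                          isb();          /* ...and to this PE's later instructions */
                  }
          }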
    • arm64/prefetch: fix a -Wtype-limits warning · 7d75275f
      Qian Cai committed
      [ Upstream commit b99286b088ea843b935dcfb29f187697359fe5cd ]
      
      The commit d5370f75 ("arm64: prefetch: add alternative pattern for
      CPUs without a prefetcher") introduced MIDR_IS_CPU_MODEL_RANGE() to be
      used in has_no_hw_prefetch() with rv_min=0 which generates a compilation
      warning from GCC,
      
      In file included from ./arch/arm64/include/asm/cache.h:8,
                     from ./include/linux/cache.h:6,
                     from ./include/linux/printk.h:9,
                     from ./include/linux/kernel.h:15,
                     from ./include/linux/cpumask.h:10,
                     from arch/arm64/kernel/cpufeature.c:11:
      arch/arm64/kernel/cpufeature.c: In function 'has_no_hw_prefetch':
      ./arch/arm64/include/asm/cputype.h:59:26: warning: comparison of
      unsigned expression >= 0 is always true [-Wtype-limits]
      _model == (model) && rv >= (rv_min) && rv <= (rv_max);  \
                              ^~
      arch/arm64/kernel/cpufeature.c:889:9: note: in expansion of macro
      'MIDR_IS_CPU_MODEL_RANGE'
      return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX,
             ^~~~~~~~~~~~~~~~~~~~~~~
      
      Fix it by converting MIDR_IS_CPU_MODEL_RANGE to a static inline
      function.

      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
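      A sketch of the converted helper (shape per the upstream patch; the mask
      names are the mainline ones):

          static inline bool midr_is_cpu_model_range(u32 midr, u32 model,
                                                     u32 rv_min, u32 rv_max)
          {
                  u32 _model = midr & MIDR_CPU_MODEL_MASK;
                  u32 rv = midr & (MIDR_REVISION_MASK | MIDR_VARIANT_MASK);

                  /* rv_min is now a function parameter rather than a literal 0
                   * at the comparison site, so -Wtype-limits stays quiet */
                  return _model == model && rv >= rv_min && rv <= rv_max;
          }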
  12. 25 Aug 2019, 2 commits
  13. 07 Aug 2019, 1 commit
  14. 04 Aug 2019, 1 commit
  15. 31 Jul 2019, 1 commit
  16. 03 Jul 2019, 3 commits
  17. 22 Jun 2019, 2 commits
  18. 31 May 2019, 3 commits
    • arm64: vdso: Fix clock_getres() for CLOCK_REALTIME · b0f6ac8c
      Vincenzo Frascino committed
      [ Upstream commit 81fb8736dd81da3fe94f28968dac60f392ec6746 ]
      
      clock_getres() in the vDSO library has to preserve the same behaviour
      of posix_get_hrtimer_res().
      
      In particular, posix_get_hrtimer_res() does:
      
          sec = 0;
          ns = hrtimer_resolution;
      
      where 'hrtimer_resolution' depends on whether or not high resolution
      timers are enabled, which is a runtime decision.
      
      The vDSO incorrectly returns the constant CLOCK_REALTIME_RES. Fix this
      by exposing 'hrtimer_resolution' in the vDSO datapage and returning that
      instead.

      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      [will: Use WRITE_ONCE(), move adr off COARSE path, renumber labels, use 'w' reg]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
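      For context, the syscall-path behaviour the vDSO has to match (condensed
      from kernel/time/posix-timers.c):

          static int posix_get_hrtimer_res(clockid_t which_clock,
                                           struct timespec64 *tp)
          {
                  tp->tv_sec = 0;
                  tp->tv_nsec = hrtimer_resolution;  /* runtime value */
                  return 0;
          }

      The fix mirrors this by writing hrtimer_resolution into the vDSO data
      page (with WRITE_ONCE(), per the bracketed note above) on each
      timekeeping update, so the vDSO can return it instead of the compile-time
      CLOCK_REALTIME_RES.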
    • arm64: Fix compiler warning from pte_unmap() with -Wunused-but-set-variable · efa336f7
      Qian Cai committed
      [ Upstream commit 74dd022f9e6260c3b5b8d15901d27ebcc5f21eda ]
      
      When building with -Wunused-but-set-variable, the compiler shouts about
      a number of pte_unmap() users, since this expands to an empty macro on
      arm64:
      
        | mm/gup.c: In function 'gup_pte_range':
        | mm/gup.c:1727:16: warning: variable 'ptem' set but not used
        | [-Wunused-but-set-variable]
        | mm/gup.c: At top level:
        | mm/memory.c: In function 'copy_pte_range':
        | mm/memory.c:821:24: warning: variable 'orig_dst_pte' set but not used
        | [-Wunused-but-set-variable]
        | mm/memory.c:821:9: warning: variable 'orig_src_pte' set but not used
        | [-Wunused-but-set-variable]
        | mm/swap_state.c: In function 'swap_ra_info':
        | mm/swap_state.c:641:15: warning: variable 'orig_pte' set but not used
        | [-Wunused-but-set-variable]
        | mm/madvise.c: In function 'madvise_free_pte_range':
        | mm/madvise.c:318:9: warning: variable 'orig_pte' set but not used
        | [-Wunused-but-set-variable]
      
      Rewrite pte_unmap() as a static inline function, which silences the
      warnings.

      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
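      The shape of the fix, sketched: an empty macro never "uses" its argument,
      while a static inline function does.

          /* before: expands to nothing, so callers' pte locals look unused */
          #define pte_unmap(pte)          do { } while (0)

          /* after: the call consumes its argument yet still generates no code */
          static inline void pte_unmap(pte_t *pte) { }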
    • arm64: errata: Add workaround for Cortex-A76 erratum #1463225 · 2eefb4a3
      Will Deacon committed
      commit 969f5ea627570e91c9d54403287ee3ed657f58fe upstream.
      
      Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum
      that can prevent interrupts from being taken when single-stepping.
      
      This patch implements a software workaround to prevent userspace from
      effectively being able to disable interrupts.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
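      A hypothetical sketch of the detection side only (the helper is the one
      discussed under item 11 above; the r3p1 upper bound is an assumption, not
      taken from the patch):

          /* Erratum 1463225 affects Cortex-A76 before r4p0, so detection is a
           * MIDR range check; r3p1 as the last affected revision is an
           * assumption here. */
          static bool affected_by_erratum_1463225(u32 midr)
          {
                  return midr_is_cpu_model_range(midr, MIDR_CORTEX_A76,
                                                 MIDR_CPU_VAR_REV(0, 0),
                                                 MIDR_CPU_VAR_REV(3, 1));
          }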
  19. 22 May 2019, 2 commits
  20. 10 May 2019, 1 commit
    • arm64: futex: Bound number of LDXR/STXR loops in FUTEX_WAKE_OP · 9ccdbde1
      Will Deacon committed
      commit 03110a5cb2161690ae5ac04994d47ed0cd6cef75 upstream.
      
      Our futex implementation makes use of LDXR/STXR loops to perform atomic
      updates to user memory from atomic context. This can lead to latency
      problems if we end up spinning around the LL/SC sequence at the expense
      of doing something useful.
      
      Rework our futex atomic operations so that we return -EAGAIN if we fail
      to update the futex word after 128 attempts. The core futex code will
      reschedule if necessary and we'll try again later.
      
      Cc: <stable@kernel.org>
      Fixes: 6170a974 ("arm64: Atomic operations")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
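      The retry policy is easy to illustrate outside the kernel; a userspace C
      analogue of the bounded loop (illustrative only, the real code is an
      LDXR/STXR asm sequence operating on user memory):

          #include <errno.h>
          #include <stdatomic.h>
          #include <stdint.h>

          #define FUTEX_MAX_LOOPS 128     /* bound from the commit */

          /* Fail with -EAGAIN after FUTEX_MAX_LOOPS lost races so the caller
           * can reschedule instead of spinning on a contended word. */
          static int bounded_atomic_add(_Atomic uint32_t *word, uint32_t oparg,
                                        uint32_t *oldval)
          {
                  uint32_t old = atomic_load_explicit(word, memory_order_relaxed);

                  for (int loops = 0; loops < FUTEX_MAX_LOOPS; loops++) {
                          if (atomic_compare_exchange_weak(word, &old,
                                                           old + oparg)) {
                                  *oldval = old;
                                  return 0;
                          }
                          /* a failed CAS refreshed 'old'; just retry */
                  }
                  return -EAGAIN;
          }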