1. 14 December 2018, 10 commits
  2. 13 December 2018, 2 commits
  3. 12 December 2018, 4 commits
    • arm64: percpu: Fix LSE implementation of value-returning pcpu atomics · 6e4ede69
      Authored by Will Deacon
      Commit 959bf2fd ("arm64: percpu: Rewrite per-cpu ops to allow use of
      LSE atomics") introduced alternative code sequences for the arm64 percpu
      atomics, so that the LSE instructions can be patched in at runtime if
      they are supported by the CPU.
      
      Unfortunately, when patching in the LSE sequence for a value-returning
      pcpu atomic, the argument registers are the wrong way round. The
      implementation of this_cpu_add_return() therefore ends up adding
      uninitialised stack to the percpu variable and returning garbage.
      
      As it turns out, there aren't very many users of the value-returning
      percpu atomics in mainline and we only spotted this due to a failure in
      the kprobes selftests. In this case, when attempting to single-step over
      the out-of-line instruction slot, the debug monitors would not be
      enabled because calling this_cpu_inc_return() on the kernel debug
      monitor refcount would fail to detect the transition from 0. We would
      consequently execute past the slot and take an undefined instruction
      exception from the kernel, resulting in a BUG:
      
       | kernel BUG at arch/arm64/kernel/traps.c:421!
       | PREEMPT SMP
       | pc : do_undefinstr+0x268/0x278
       | lr : do_undefinstr+0x124/0x278
       | Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
       | Call trace:
       |  do_undefinstr+0x268/0x278
       |  el1_undef+0x10/0x78
       |  0xffff00000803c004
       |  init_kprobes+0x150/0x180
       |  do_one_initcall+0x74/0x178
       |  kernel_init_freeable+0x188/0x224
       |  kernel_init+0x10/0x100
       |  ret_from_fork+0x10/0x1c
      
      Fix the argument order to get the value-returning pcpu atomics working
      correctly when implemented using the LSE instructions.
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6e4ede69
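      As a hedged illustration of the failure mode (not the kernel's actual
      debug-monitors code; the refcount name and enable_hw_single_step() helper
      are hypothetical), the kprobes path only enables single-step when the
      value-returning percpu op reports the 0 -> 1 transition:

        #include <linux/percpu.h>

        static DEFINE_PER_CPU(int, debug_monitor_refcount);    /* hypothetical */

        static void enable_hw_single_step(void)
        {
                /* hypothetical stand-in for the real MDSCR_EL1 poking */
        }

        static void debug_monitors_get_sketch(void)
        {
                /*
                 * Only the 0 -> 1 transition enables the hardware; if the
                 * LSE-patched this_cpu_inc_return() returns garbage, this
                 * branch is never taken and the CPU runs off the end of
                 * the single-step slot.
                 */
                if (this_cpu_inc_return(debug_monitor_refcount) == 1)
                        enable_hw_single_step();
        }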
    • arm64: add <asm/asm-prototypes.h> · c3296a13
      Authored by Mark Rutland
      While we can export symbols from assembly files, CONFIG_MODVERSIONS requires C
      declarations of anything that's exported.
      
      Let's account for this as other architectures do by placing these declarations
      in <asm/asm-prototypes.h>, which kbuild will automatically use to generate
      modversion information for assembly files.
      
      Since we already define most prototypes in existing headers, we simply need to
      include those headers in <asm/asm-prototypes.h>, and don't need to duplicate
      these.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c3296a13
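      As a rough sketch of the pattern (header names are illustrative rather
      than a verbatim copy of the arm64 file), <asm/asm-prototypes.h> simply
      aggregates the headers that already declare the exported assembly
      symbols so genksyms can compute modversion CRCs for them:

        /* arch/arm64/include/asm/asm-prototypes.h -- illustrative sketch */
        #ifndef __ASM_PROTOTYPES_H
        #define __ASM_PROTOTYPES_H

        #include <asm-generic/asm-prototypes.h>

        #include <asm/checksum.h>
        #include <asm/page.h>
        #include <asm/string.h>
        #include <asm/uaccess.h>

        #endif /* __ASM_PROTOTYPES_H */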
    • arm64: mm: Introduce MAX_USER_VA_BITS definition · 9b31cf49
      Authored by Will Deacon
      With the introduction of 52-bit virtual addressing for userspace, we are
      now in a position where the virtual addressing capability of userspace
      may exceed that of the kernel. Consequently, the VA_BITS definition
      cannot be used blindly, since it reflects only the size of kernel
      virtual addresses.
      
      This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
      depending on whether 52-bit virtual addressing has been configured at
      build time, removing a few places where the 52 is open-coded based on
      explicit CONFIG_ guards.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9b31cf49
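      A minimal sketch of the definition described above (the CONFIG_ symbol
      name is assumed from the surrounding series, so treat it as
      illustrative):

        #ifdef CONFIG_ARM64_USER_VA_BITS_52
        #define MAX_USER_VA_BITS        52
        #else
        #define MAX_USER_VA_BITS        VA_BITS
        #endif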
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Authored by Will Deacon
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value, rather than inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7faa313f
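      A hedged sketch of the layout described above (see arm64's
      <asm/thread_info.h> for the real definition): on big-endian, a 32-bit
      load from the start of the 64-bit word picks up the reschedule flag
      rather than the count, which is why the assembly now works on the full
      64-bit value.

        union preempt_count_sketch {            /* illustrative, not verbatim */
                u64 full;                       /* what the fixed assembly loads */
                struct {
        #ifdef CONFIG_CPU_BIG_ENDIAN
                        u32 need_resched;       /* a 32-bit load at offset 0 sees this... */
                        u32 count;
        #else
                        u32 count;              /* ...instead of this, as on little-endian */
                        u32 need_resched;
        #endif
                };
        };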
  4. 11 December 2018, 8 commits
    • arm64: smp: Rework early feature mismatch detection · 66f16a24
      Authored by Will Deacon
      Rather than add additional variables to detect specific early feature
      mismatches with secondary CPUs, we can instead dedicate the upper bits
      of the CPU boot status word to flag specific mismatches.
      
      This allows us to communicate both granule and VA-size mismatches back
      to the primary CPU without the need for additional book-keeping.
      Tested-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      66f16a24
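      A hedged sketch of the scheme (the constants and shift are illustrative,
      not the exact values in <asm/smp.h>): the low bits keep the existing
      boot status values, while the upper bits encode why a secondary CPU got
      stuck.

        #define CPU_BOOT_SUCCESS                0
        #define CPU_STUCK_IN_KERNEL             4

        #define CPU_STUCK_REASON_SHIFT          60
        #define CPU_STUCK_REASON_52_BIT_VA      (1UL << CPU_STUCK_REASON_SHIFT)
        #define CPU_STUCK_REASON_NO_GRAN        (2UL << CPU_STUCK_REASON_SHIFT)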
    • arm64: Kconfig: Re-jig CONFIG options for 52-bit VA · 68d23da4
      Authored by Will Deacon
      Enabling 52-bit VAs for userspace is pretty confusing, since it requires
      you to select "48-bit" virtual addressing in the Kconfig.
      
      Rework the logic so that 52-bit user virtual addressing is advertised in
      the "Virtual address space size" choice, along with some help text to
      describe its interaction with Pointer Authentication. The EXPERT-only
      option to force all user mappings to the 52-bit range is then made
      available immediately below the VA size selection.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      68d23da4
    • arm64: mm: Allow forcing all userspace addresses to 52-bit · b9567720
      Authored by Steve Capper
      On arm64 52-bit VAs are provided to userspace when a hint is supplied to
      mmap. This helps maintain compatibility with software that expects at
      most 48-bit VAs to be returned.
      
      In order to help identify software that has 48-bit VA assumptions, this
      patch allows one to compile a kernel where 52-bit VAs are returned by
      default on HW that supports it.
      
      This feature is intended to be for development systems only.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b9567720
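      For context, here is a small userspace sketch of the default
      (non-forced) behaviour: addresses above 48 bits are only handed out when
      the caller passes a hint in that range, and only on hardware and kernels
      with 52-bit VA support.

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
                /* Hint above the 48-bit boundary; ignored on 48-bit systems. */
                void *hint = (void *)(1UL << 50);
                void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p != MAP_FAILED)
                        printf("mapped at %p\n", p);
                return 0;
        }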
    • arm64: mm: introduce 52-bit userspace support · 67e7fdfc
      Authored by Steve Capper
      On arm64 there is optional support for a 52-bit virtual address space.
      To exploit this, one has to be running with a 64KB page size on
      hardware that supports it.
      
      For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
      some changes are needed to support a 52-bit userspace:
       * TCR_EL1.T0SZ needs to be 12 instead of 16,
       * TASK_SIZE needs to reflect the new size.
      
      This patch implements the above when the support for 52-bit VAs is
      detected at early boot time.
      
      On arm64, userspace address translation is controlled by TTBR0_EL1. As
      well as userspace, TTBR0_EL1 controls:
       * The identity mapping,
       * EFI runtime code.
      
      It is possible to run a kernel with an identity mapping that has a
      larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
      would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
      52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
      12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
      disabled.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      67e7fdfc
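      The T0SZ arithmetic behind the first bullet above, as a hedged sketch
      (this mirrors the kernel's TCR_T0SZ() helper; the field offset is
      assumed):

        /* TCR_EL1.T0SZ encodes the region size as 64 minus the VA width. */
        #define TCR_T0SZ(va_bits)       ((64UL - (va_bits)) << 0)   /* T0SZ is bits [5:0] */

        /* 48-bit userspace: TCR_T0SZ(48) == 16;  52-bit userspace: TCR_T0SZ(52) == 12 */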
    • arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD · e842dfb5
      Authored by Steve Capper
      Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
      entries (for the 48-bit case) to 1024 entries. This quantity,
      PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds
      to a given virtual address, addr:
      
      pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)
      
      Userspace addresses are prefixed by 0's, so for a 48-bit userspace
      address, uva, the following is true:
      (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words, a 48-bit userspace address will have the same pgd_index
      when using PTRS_PER_PGD = 64 and 1024.
      
      Kernel addresses are prefixed by 1's, so given a 48-bit kernel address,
      kva, we have the following inequality:
      (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1)
      
      In other words a 48-bit kernel virtual address will have a different
      pgd_index when using PTRS_PER_PGD = 64 and 1024.
      
      If, however, we note that:
      kva = (0xFFFF << 48) + lower (where lower[63:48] == 0)
      and, PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE)
      
      We can consider:
      ((kva >> PGDIR_SHIFT) & (1024 - 1)) - ((kva >> PGDIR_SHIFT) & (64 - 1))
       = ((0xFFFF << 6) & 0x3FF) - ((0xFFFF << 6) & 0x3F)	// "lower" cancels out
       = 0x3C0
      
      In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
      provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when
      running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).
      
      For kernel configuration where 52-bit userspace VAs are possible, this
      patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the
      52-bit value.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      [will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e842dfb5
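      The arithmetic above can be checked with a small standalone program
      (64KB pages, so PGDIR_SHIFT = 42; the kernel VA is arbitrary apart from
      its top 16 bits being set):

        #include <stdio.h>

        int main(void)
        {
                unsigned long kva = 0xFFFF000012345678UL;  /* any 48-bit kernel VA */
                unsigned long idx_1024 = (kva >> 42) & (1024 - 1);
                unsigned long idx_64   = (kva >> 42) & (64 - 1);

                /* Expect 0x3c0 entries, i.e. a 0x1e00-byte offset on TTBR1. */
                printf("delta = %#lx, ttbr1 offset = %#lx\n",
                       idx_1024 - idx_64, (idx_1024 - idx_64) * 8);
                return 0;
        }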
    • arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base · e5d99157
      Authored by Steve Capper
      Now that we have DEFAULT_MAP_WINDOW defined, we can define the
      arch_get_mmap_end and arch_get_mmap_base helpers to allow for high
      addresses in mmap.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e5d99157
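      A hedged sketch of what the two helpers look like (not a verbatim copy
      of <asm/processor.h>): only a hint above DEFAULT_MAP_WINDOW opens up the
      full TASK_SIZE range.

        #define arch_get_mmap_end(addr) \
                ((addr) > DEFAULT_MAP_WINDOW ? TASK_SIZE : DEFAULT_MAP_WINDOW)

        #define arch_get_mmap_base(addr, base) \
                ((addr) > DEFAULT_MAP_WINDOW ? \
                        (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base))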
    • arm64: mm: Introduce DEFAULT_MAP_WINDOW · 363524d2
      Authored by Steve Capper
      We wish to introduce a 52-bit virtual address space for userspace but
      maintain compatibility with software that assumes the maximum VA space
      size is 48 bits.
      
      In order to achieve this, on 52-bit VA systems, we make mmap behave as
      if it were running on a 48-bit VA system (unless userspace explicitly
      requests a VA where addr[51:48] != 0).
      
      On a system running a 52-bit userspace we need TASK_SIZE to represent
      the 52-bit limit as it is used in various places to distinguish between
      kernelspace and userspace addresses.
      
      Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses
      TTBR0) to represent the non-extended VA space.
      
      This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and
      switches the appropriate logic to use that instead of TASK_SIZE.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      363524d2
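      A rough sketch of the split this introduces (the config symbol and exact
      expressions are assumptions, not copied from the patch):

        #ifdef CONFIG_ARM64_USER_VA_BITS_52             /* assumed config symbol */
        #define TASK_SIZE_64            (1UL << 52)     /* user/kernel boundary checks */
        #else
        #define TASK_SIZE_64            (1UL << VA_BITS)
        #endif

        /* mmap, stack, ELF loader and EFI keep using the non-extended window. */
        #define DEFAULT_MAP_WINDOW_64   (1UL << VA_BITS)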
    • arm64: kasan: Increase stack size for KASAN_EXTRA · 6e883067
      Authored by Qian Cai
      If the kernel is configured with KASAN_EXTRA, the stack size is
      increased significantly due to setting the GCC -fstack-reuse option to
      "none" [1]. As a result, it can trigger a stack overrun quite often with
      a 32k stack when compiled using GCC 8. For example, this reproducer
      
        https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c
      
      can trigger a "corrupted stack end detected inside scheduler" very
      reliably with CONFIG_SCHED_STACK_END_CHECK enabled. There are other
      reports at:
      
        https://lore.kernel.org/lkml/1542144497.12945.29.camel@gmx.us/
        https://lore.kernel.org/lkml/721E7B42-2D55-4866-9C1A-3E8D64F33F9C@gmx.us/
      
      There are just too many functions that can have a large stack with
      KASAN_EXTRA due to large local variables, and that are called over and
      over again without their stacks being reused. Some noticeable ones
      are:
      
      size function
      7536 shrink_inactive_list
      7440 shrink_page_list
      6560 fscache_stats_show
      3920 jbd2_journal_commit_transaction
      3216 try_to_unmap_one
      3072 migrate_page_move_mapping
      3584 migrate_misplaced_transhuge_page
      3920 ip_vs_lblcr_schedule
      4304 lpfc_nvme_info_show
      3888 lpfc_debugfs_nvmestat_data.constprop
      
      There are another 49 functions over 2k in size when compiling the kernel
      with "-Wframe-larger-than=" on this machine. Hence, it is too much work
      to change the Makefile for each object so that it is compiled without
      -fsanitize-address-use-after-scope individually.
      
      [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c23
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6e883067
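      A hedged sketch of the resulting stack-size bump (the KASAN_THREAD_SHIFT
      idiom exists in arm64's <asm/memory.h>; the values shown are what the
      commit implies rather than a verbatim diff):

        #ifdef CONFIG_KASAN_EXTRA
        #define KASAN_THREAD_SHIFT      2       /* 64k stacks instead of 32k */
        #else
        #define KASAN_THREAD_SHIFT      1
        #endif

        #define MIN_THREAD_SHIFT        (14 + KASAN_THREAD_SHIFT)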
  5. 10 December 2018, 6 commits
  6. 08 December 2018, 4 commits
    • arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint · 42305099
      Authored by Will Deacon
      The "L" AArch64 machine constraint, which we use for the "old" value in
      an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit
      logical instruction. However, for cmpxchg() operations on types smaller
      than 64 bits, this constraint can result in an invalid instruction which
      is correctly rejected by GAS, such as EOR W1, W1, #0xffffffff.
      
      Whilst we could special-case the constraint based on the cmpxchg size,
      it's far easier to change the constraint to "K" and put up with using
      a register for large 64-bit immediates. For out-of-line LL/SC atomics,
      this is all moot anyway.
      Reported-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      42305099
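      A hedged illustration of the constraint change (a simplified 32-bit
      LL/SC cmpxchg, not the kernel's actual macro): the combined "Kr"
      constraint lets GCC emit "old" as a 32-bit logical immediate when it can
      and fall back to a register otherwise, whereas "Lr" may produce a
      64-bit-only immediate that GAS rejects for the W-register form.

        static inline unsigned int cmpxchg32_sketch(unsigned int *ptr,
                                                    unsigned int old,
                                                    unsigned int new)
        {
                unsigned int oldval, tmp;

                asm volatile(
                "1:     ldxr    %w0, %2\n"
                "       eor     %w1, %w0, %w3\n"        /* compare with "old" */
                "       cbnz    %w1, 2f\n"
                "       stxr    %w1, %w4, %2\n"
                "       cbnz    %w1, 1b\n"
                "2:"
                : "=&r" (oldval), "=&r" (tmp), "+Q" (*ptr)
                : "Kr" (old), "r" (new)
                : "memory");

                return oldval;
        }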
    • arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics · 959bf2fd
      Authored by Will Deacon
      Our percpu code is a bit of an inconsistent mess:
      
        * It rolls its own xchg(), but reuses cmpxchg_local()
        * It uses various different flavours of preempt_{enable,disable}()
        * It returns values even for the non-returning RmW operations
        * It makes no use of LSE atomics outside of the cmpxchg() ops
        * There are individual macros for different sizes of access, but these
          are all funneled through a switch statement rather than dispatched
          directly to the relevant case
      
      This patch rewrites the per-cpu operations to address these shortcomings.
      Whilst the new code is a lot cleaner, the big advantage is that we can
      use the non-returning ST- atomic instructions when we have LSE.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      959bf2fd
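      As a hedged illustration of the "non-returning ST-" forms mentioned
      above (not the kernel's alternative-patched macro, which still falls
      back to LL/SC on CPUs without LSE): STADD performs the add in memory
      without returning the old value. The caller is assumed to have disabled
      preemption, and the code needs an LSE-capable toolchain (e.g.
      -march=armv8.1-a).

        static inline void percpu_add_sketch(unsigned int *percpu_ptr,
                                             unsigned int val)
        {
                asm volatile("stadd %w[val], %[ptr]"
                             : [ptr] "+Q" (*percpu_ptr)
                             : [val] "r" (val));
        }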
    • arm64: Avoid masking "old" for LSE cmpxchg() implementation · b4f9209b
      Authored by Will Deacon
      The CAS instructions implicitly access only the relevant bits of the "old"
      argument, so there is no need for explicit masking via type-casting as
      there is in the LL/SC implementation.
      
      Move the casting into the LL/SC code and remove it altogether for the LSE
      implementation.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b4f9209b
    • arm64: Avoid redundant type conversions in xchg() and cmpxchg() · 5ef3fe4c
      Authored by Will Deacon
      Our atomic instructions (either LSE atomics or LDXR/STXR sequences)
      natively support byte, half-word, word and double-word memory accesses
      so there is no need to mask the data register prior to being stored.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5ef3fe4c
  7. 07 December 2018, 4 commits
  8. 06 December 2018, 2 commits