1. 15 Aug 2019 (1 commit)
    • arm64: memory: rename VA_START to PAGE_END · 77ad4ce6
      Committed by Mark Rutland
      Prior to commit:
      
        14c127c9 ("arm64: mm: Flip kernel VA space")
      
      ... VA_START described the start of the TTBR1 address space for a given
      VA size described by VA_BITS, where all kernel mappings began.
      
      Since that commit, VA_START described a portion midway through the
      address space, where the linear map ends and other kernel mappings
      begin.
      
      To avoid confusion, let's rename VA_START to PAGE_END, making it clear
      that it's not the start of the TTBR1 address space and implying that
      it's related to PAGE_OFFSET. Comments and other mnemonics are updated
      accordingly, along with a typo fix in the description of VMEMMAP_SIZE.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      77ad4ce6
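      A minimal sketch of how the renamed macros relate, paraphrased from the
      arm64 <asm/memory.h> of that era (the exact expressions in the tree may
      differ slightly):

          /* Sketch only: PAGE_OFFSET is the start of the TTBR1 space and of the
           * linear map; PAGE_END (formerly VA_START) is where the linear map
           * ends and the other kernel mappings begin. */
          #define _PAGE_OFFSET(va)  (-(UL(1) << (va)))
          #define PAGE_OFFSET       (_PAGE_OFFSET(VA_BITS))
          #define _PAGE_END(va)     (-(UL(1) << ((va) - 1)))
          #define PAGE_END          (_PAGE_END(VA_BITS_MIN))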
  2. 14 Aug 2019 (8 commits)
  3. 09 Aug 2019 (9 commits)
    • arm64: mm: Simplify definition of virt_addr_valid() · d2d73d2f
      Committed by Will Deacon
      _virt_addr_valid() is defined as the same value in two places and rolls
      its own version of virt_to_pfn() in both cases.
      
      Consolidate these definitions by inlining a simplified version directly
      into virt_addr_valid().
      Signed-off-by: Will Deacon <will@kernel.org>
      d2d73d2f
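      A plausible shape of the consolidated helper, assuming the usual
      __is_lm_address()/virt_to_pfn() helpers (a sketch, not the literal
      patch):

          #define virt_addr_valid(addr) ({                                   \
                  __typeof__(addr) __addr = (addr);                          \
                  __is_lm_address(__addr) && pfn_valid(virt_to_pfn(__addr)); \
          })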
    • arm64: mm: Remove vabits_user · 2c624fe6
      Committed by Steve Capper
      Previous patches have enabled 52-bit kernel + user VAs and there is no
      longer any scenario where user VA != kernel VA size.
      
      This patch removes the now-redundant vabits_user variable and replaces
      its usage with vabits_actual where appropriate.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      2c624fe6
    • arm64: mm: Introduce 52-bit Kernel VAs · b6d00d47
      Committed by Steve Capper
      Most of the machinery is now in place to enable 52-bit kernel VAs that
      are detectable at boot time.
      
      This patch adds a Kconfig option for 52-bit user and kernel addresses,
      plumbs in the requisite CONFIG_ macros, and sets TCR.T1SZ,
      physvirt_offset and vmemmap at early boot.
      
      To simplify things this patch also removes the 52-bit user/48-bit kernel
      kconfig option.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      b6d00d47
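      A rough sketch of the pieces involved (symbol names as used upstream;
      simplified, and not the literal diff): the compile-time span becomes
      52 bits, the span actually usable is detected at boot, and TCR_EL1.T1SZ
      is programmed from that value.

          #define VA_BITS          (CONFIG_ARM64_VA_BITS)  /* 52 with the new option */
          #define TCR_T1SZ_OFFSET  16
          #define TCR_T1SZ(x)      ((UL(64) - (x)) << TCR_T1SZ_OFFSET)  /* T1SZ = 64 - VA bits */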
    • arm64: mm: Modify calculation of VMEMMAP_SIZE · ce3aaed8
      Committed by Steve Capper
      In a later patch we will need to have a slightly larger VMEMMAP region
      to accommodate boot time selection between 48/52-bit kernel VAs.
      
      This patch modifies the formula for computing VMEMMAP_SIZE to depend
      explicitly on the PAGE_OFFSET and start of kernel addressable memory.
      (This allows for a slightly larger direct linear map in future).
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      ce3aaed8
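      A sketch of the resulting formula (shown with the later PAGE_END naming;
      treat the exact identifiers as assumptions): the vmemmap must be able to
      cover the largest possible linear map, i.e. one struct page per page of
      linear-map span.

          #define VMEMMAP_SIZE  ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) \
                                 >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))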
    • arm64: mm: Introduce vabits_actual · 5383cc6e
      Committed by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, one
      needs to know the actual VA_BITS detected. A new variable vabits_actual
      is introduced in this commit and employed for the KVM hypervisor layout,
      KASAN, fault handling and phys-to/from-virt translation where there
      would normally be compile time constants.
      
      In order to maintain performance in phys_to_virt, another variable
      physvirt_offset is introduced.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      5383cc6e
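      A minimal sketch of the two variables and how the linear-map translation
      can use them (simplified; the real definitions carry additional casts
      and debug checks):

          extern u64 vabits_actual;   /* VA bits detected at boot */
          extern s64 physvirt_offset; /* PHYS_OFFSET - PAGE_OFFSET, set early in boot */

          #define __phys_to_virt(x)   ((unsigned long)((x) - physvirt_offset))
          #define __lm_to_phys(addr)  ((phys_addr_t)((addr) + physvirt_offset))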
    • arm64: mm: Introduce VA_BITS_MIN · 90ec95cd
      Committed by Steve Capper
      In order to support 52-bit kernel addresses detectable at boot time, the
      kernel needs to know the most conservative VA_BITS possible should it
      need to fall back to this quantity due to lack of hardware support.
      
      A new compile time constant VA_BITS_MIN is introduced in this patch and
      it is employed in the KASAN end address, KASLR, and EFI stub.
      
      For Arm, if 52-bit VA support is unavailable the fallback is 48 bits;
      in other words, VA_BITS_MIN = min(48, VA_BITS).
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      90ec95cd
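      A sketch of the new constant (conceptually min(48, VA_BITS)):

          #if VA_BITS > 48
          #define VA_BITS_MIN  (48)
          #else
          #define VA_BITS_MIN  (VA_BITS)
          #endif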
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Committed by Steve Capper
      KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
      line argument and affects the codegen of the inline address sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted into many places, thus
      KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
      text.
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide
      shadow addresses, we know that the end of the shadow region is constant
      with respect to the VA space size:
          KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
      build scripts with the VA size used as a parameter. (There are build
      time checks in the C code too to ensure that expected values are being
      derived). It is sufficient, and indeed is a simplification, to remove
      the build scripts (and build time checks) entirely and instead provide
      KASAN_SHADOW_OFFSET values.
      
      This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead we adopt the approach used by x86 to supply
      offset values in Kconfig. To help debug/develop future VA space changes,
      the Makefile logic has been preserved in a script file in the arm64
      Documentation folder.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      6bd1d0be
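      With the offset coming from Kconfig, only the shadow end is a fixed
      constant; the shadow start grows downwards as the VA space grows. A
      sketch of the relation (expression paraphrased, not the literal source):

          #define KASAN_SHADOW_END  ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
                                     + KASAN_SHADOW_OFFSET)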
    • arm64: mm: Flip kernel VA space · 14c127c9
      Committed by Steve Capper
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements mean we must flip the kernel address space halves so that
      the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a 2MB guard region (to preserve
      alignment for the fixed map) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      14c127c9
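      A rough picture of the flipped layout (a sketch, not to scale; addresses
      assume VA_BITS = 48):

          /*
           * 0xffff000000000000   PAGE_OFFSET: start of the direct linear map
           * 0xffff800000000000   end of the linear map; KASAN shadow, modules,
           *                      vmalloc, fixmap and vmemmap live above here
           */
          #define PAGE_OFFSET  (UL(0xffffffffffffffff) - (UL(1) << VA_BITS) + 1)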
    • arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START · 9cb1c5dd
      Committed by Steve Capper
      Currently there are assumptions about the alignment of VMEMMAP_START
      and PAGE_OFFSET that won't be valid after this series is applied.
      
      These assumptions are in the form of bitwise operators being used
      instead of addition and subtraction when calculating addresses.
      
      This patch replaces these bitwise operators with addition/subtraction.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      9cb1c5dd
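      A hypothetical illustration of the kind of change (not the literal
      diff): the OR form is only equivalent to addition while PAGE_OFFSET is
      aligned so that the OR'd bits are guaranteed clear, an assumption the
      rest of the series breaks.

          /* before: #define __phys_to_virt(x)  ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) */
          #define __phys_to_virt(x)  ((unsigned long)((x) - PHYS_OFFSET) + PAGE_OFFSET)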
  4. 01 Aug 2019 (1 commit)
    • arm64/mm: fix variable 'tag' set but not used · 7732d20a
      Committed by Qian Cai
      When CONFIG_KASAN_SW_TAGS=n, set_tag() is compiled away. GCC throws a
      warning,
      
      mm/kasan/common.c: In function '__kasan_kmalloc':
      mm/kasan/common.c:464:5: warning: variable 'tag' set but not used
      [-Wunused-but-set-variable]
        u8 tag = 0xff;
           ^~~
      
      Fix it by making __tag_set() a static inline function the same as
      arch_kasan_set_tag() in mm/kasan/kasan.h for consistency because there
      is a macro in arch/arm64/include/asm/kasan.h,
      
       #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag)
      
      However, when CONFIG_DEBUG_VIRTUAL=n and CONFIG_SPARSEMEM_VMEMMAP=y,
      page_to_virt() will call __tag_set() with a parameter of the wrong
      type, so fix that as well. Also, still let page_to_virt() return
      "void *" instead of "const void *", so that a similar cast is not
      needed in lowmem_page_address().
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: Will Deacon <will@kernel.org>
      7732d20a
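      A sketch of the resulting helper (simplified; __tag_shifted() is the
      arm64 macro that places a tag in the top byte, or expands to 0 when
      CONFIG_KASAN_SW_TAGS=n):

          static inline const void *__tag_set(const void *addr, u8 tag)
          {
                  u64 __addr = (u64)addr & ~__tag_shifted(0xff);

                  return (const void *)(__addr | __tag_shifted(tag));
          }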
  5. 19 Jun 2019 (1 commit)
  6. 15 May 2019 (1 commit)
  7. 16 Apr 2019 (1 commit)
    • arm64: mm: check virtual addr in virt_to_page() if CONFIG_DEBUG_VIRTUAL=y · eea1bb22
      Committed by Miles Chen
      This change uses the original virt_to_page() (the one with __pa()) to
      check the given virtual address if CONFIG_DEBUG_VIRTUAL=y.
      
      Recently, I worked on a bug: a driver passes a symbol address to
      dma_map_single() and the virt_to_page() (called by dma_map_single())
      does not work for non-linear addresses after commit 9f287591
      ("arm64: mm: restrict virt_to_page() to the linear mapping").
      
      I tried to trap the bug by enabling CONFIG_DEBUG_VIRTUAL, but it
      did not work, because the commit removes the __pa() from
      virt_to_page() while CONFIG_DEBUG_VIRTUAL checks the virtual address
      in __pa()/__virt_to_phys().
      
      A simple solution is to use the original virt_to_page()
      (the one with __pa()) if CONFIG_DEBUG_VIRTUAL=y.
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Miles Chen <miles.chen@mediatek.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      eea1bb22
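      A sketch of the idea (condition and macro body paraphrased): route
      virt_to_page() through __pa() whenever CONFIG_DEBUG_VIRTUAL is enabled,
      so a non-linear address trips the __virt_to_phys() check instead of
      being silently mistranslated.

          #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
          #define virt_to_page(kaddr)  pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
          #endif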
  8. 06 Mar 2019 (1 commit)
  9. 01 Mar 2019 (1 commit)
  10. 16 Feb 2019 (1 commit)
    • arm64, mm, efi: Account for GICv3 LPI tables in static memblock reserve table · 8a5b403d
      Committed by Ard Biesheuvel
      In the irqchip and EFI code, we have what basically amounts to a quirk
      to work around a peculiarity in the GICv3 architecture, which permits
      the system memory address of LPI tables to be programmable only once
      after a CPU reset. This means kexec kernels must use the same memory
      as the first kernel, and thus ensure that this memory has not been
      given out for other purposes by the time the ITS init code runs, which
      is not very early for secondary CPUs.
      
      On systems with many CPUs, these reservations could overflow the
      memblock reservation table, and this was addressed in commit:
      
        eff89628 ("efi/arm: Defer persistent reservations until after paging_init()")
      
      However, this turns out to have made things worse, since the allocation
      of page tables and heap space for the resized memblock reservation table
      itself may overwrite the regions we are attempting to reserve, which may
      cause all kinds of corruption, also considering that the ITS will still
      be poking bits into that memory in response to incoming MSIs.
      
      So instead, let's grow the static memblock reservation table on such
      systems so it can accommodate these reservations at an earlier time.
      This will permit us to revert the above commit in a subsequent patch.
      
      [ mingo: Minor cleanups. ]
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20190215123333.21209-2-ard.biesheuvel@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8a5b403d
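      A sketch of the approach (macro name as used upstream; the guard
      condition is an assumption here): arm64 overrides the size of the static
      reservation table so that one LPI-table reservation per CPU fits before
      the table can be resized.

          #if defined(CONFIG_EFI) && defined(CONFIG_ARM_GIC_V3_ITS)
          #define INIT_MEMBLOCK_RESERVED_REGIONS  (INIT_MEMBLOCK_REGIONS + NR_CPUS + 1)
          #endif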
  11. 29 Dec 2018 (5 commits)
  12. 15 Dec 2018 (1 commit)
  13. 14 Dec 2018 (1 commit)
    • arm64: expose user PAC bit positions via ptrace · ec6e822d
      Committed by Mark Rutland
      When pointer authentication is in use, data/instruction pointers have a
      number of PAC bits inserted into them. The number and position of these
      bits depends on the configured TCR_ELx.TxSZ and whether tagging is
      enabled. ARMv8.3 allows tagging to differ for instruction and data
      pointers.
      
      For userspace debuggers to unwind the stack and/or to follow pointer
      chains, they need to be able to remove the PAC bits before attempting to
      use a pointer.
      
      This patch adds a new structure with masks describing the location of
      the PAC bits in userspace instruction and data pointers (i.e. those
      addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
      By clearing these bits from pointers (and replacing them with the value
      of bit 55), userspace can acquire the PAC-less versions.
      
      This new regset is exposed when the kernel is built with (user) pointer
      authentication support, and the address authentication feature is
      enabled. Otherwise, the regset is hidden.
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      [will: Fix to use vabits_user instead of VA_BITS and rename macro]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ec6e822d
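      A sketch of the user-visible payload (field names assumed; two 64-bit
      masks, one for data pointers and one for instruction pointers), which
      userspace reads via PTRACE_GETREGSET with the NT_ARM_PAC_MASK regset:

          struct user_pac_mask {
                  __u64 data_mask;
                  __u64 insn_mask;
          };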
  14. 12 Dec 2018 (1 commit)
    • arm64: mm: Introduce MAX_USER_VA_BITS definition · 9b31cf49
      Committed by Will Deacon
      With the introduction of 52-bit virtual addressing for userspace, we are
      now in a position where the virtual addressing capability of userspace
      may exceed that of the kernel. Consequently, the VA_BITS definition
      cannot be used blindly, since it reflects only the size of kernel
      virtual addresses.
      
      This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
      depending on whether 52-bit virtual addressing has been configured at
      build time, removing a few places where the 52 is open-coded based on
      explicit CONFIG_ guards.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9b31cf49
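      A sketch of the definition described above (config symbol name as of
      that series):

          #ifdef CONFIG_ARM64_USER_VA_BITS_52
          #define MAX_USER_VA_BITS  52
          #else
          #define MAX_USER_VA_BITS  VA_BITS
          #endif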
  15. 11 Dec 2018 (1 commit)
  16. 05 Dec 2018 (1 commit)
    • arm64/bpf: don't allocate BPF JIT programs in module memory · 91fc957c
      Committed by Ard Biesheuvel
      The arm64 module region is a 128 MB region that is kept close to
      the core kernel, in order to ensure that relative branches are
      always in range. So using the same region for programs that do
      not have this restriction is wasteful, and preferably avoided.
      
      Now that the core BPF JIT code permits the alloc/free routines to
      be overridden, implement them by vmalloc()/vfree() calls from a
      dedicated 128 MB region set aside for BPF programs. This ensures
      that BPF programs are still in branching range of each other, which
      is something the JIT currently depends upon (and is not guaranteed
      when using module_alloc() on KASLR kernels like we do currently).
      It also ensures that placement of BPF programs does not correlate
      with the placement of the core kernel or modules, making it less
      likely that leaking the former will reveal the latter.
      
      This also solves an issue under KASAN, where shadow memory is
      needlessly allocated for all BPF programs (which don't require KASAN
      shadow pages since they are not KASAN instrumented).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      91fc957c
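      A sketch of the overridden allocator (region bounds refer to the
      dedicated BPF JIT window carved out of the arm64 VA map; their exact
      placement is elided here):

          void *bpf_jit_alloc_exec(unsigned long size)
          {
                  /* allocate executable pages inside the BPF-only region */
                  return __vmalloc_node_range(size, PAGE_SIZE,
                                              BPF_JIT_REGION_START, BPF_JIT_REGION_END,
                                              GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                              NUMA_NO_NODE, __builtin_return_address(0));
          }

          void bpf_jit_free_exec(void *addr)
          {
                  return vfree(addr);
          }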
  17. 27 Nov 2018 (1 commit)
  18. 09 Jul 2018 (1 commit)
  19. 12 Apr 2018 (1 commit)
  20. 07 Feb 2018 (1 commit)
  21. 05 Oct 2017 (1 commit)
    • arm64: Use larger stacks when KASAN is selected · b02faed1
      Committed by Mark Rutland
      AddressSanitizer instrumentation can significantly bloat the stack, and
      with GCC 7 this can result in stack overflows at boot time in some
      configurations.
      
      We can avoid this by doubling our stack size when KASAN is in use, as is
      already done on x86 (and has been since KASAN was introduced).
      Regardless of other patches to decrease KASAN's stack utilization,
      kernels built with KASAN will always require more stack space than those
      built without, and we should take this into account.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b02faed1
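      A sketch of the approach (constant names as in arm64 <asm/memory.h>):
      bump the stack-size shift by one when KASAN is enabled, doubling the
      stack.

          #ifdef CONFIG_KASAN
          #define KASAN_THREAD_SHIFT  1
          #else
          #define KASAN_THREAD_SHIFT  0
          #endif

          #define MIN_THREAD_SHIFT    (14 + KASAN_THREAD_SHIFT)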