1. 29 Jul, 2021: 1 commit
  2. 15 Jun, 2021: 1 commit
  3. 07 Jun, 2021: 1 commit
  4. 27 May, 2021: 1 commit
  5. 14 Apr, 2021: 2 commits
  6. 25 Mar, 2021: 1 commit
  7. 13 Jan, 2021: 1 commit
    • arm64: Remove arm64_dma32_phys_limit and its uses · d78050ee
      Authored by Catalin Marinas
      With the introduction of a dynamic ZONE_DMA range based on DT or IORT
      information, there's no need for CMA allocations from the wider
      ZONE_DMA32 since on most platforms ZONE_DMA will cover the 32-bit
      addressable range. Remove the arm64_dma32_phys_limit and set
      arm64_dma_phys_limit to cover the smallest DMA range required on the
      platform. CMA allocation and crashkernel reservation now go in the
      dynamically sized ZONE_DMA, allowing correct functionality on RPi4.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Zhou <chenzhou10@huawei.com>
      Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> # On RPi4B
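      The selection described above amounts to picking the smallest limit that still covers all DMA masters; a minimal sketch of that arithmetic, assuming hypothetical inputs (`zone_dma_limit` as parsed from DT/IORT, `mem_end` as the top of RAM; neither name is a real kernel symbol):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustrative sketch only: the effective DMA physical limit is the
       * end of the dynamically sized ZONE_DMA, capped at the end of RAM. */
      static uint64_t pick_dma_phys_limit(uint64_t zone_dma_limit, uint64_t mem_end)
      {
          return zone_dma_limit < mem_end ? zone_dma_limit : mem_end;
      }
      ```

      On a platform like RPi4 with a 1 GiB DMA constraint, this yields a limit well below the old 4 GiB ZONE_DMA32 boundary.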
  8. 04 Jan, 2021: 1 commit
  9. 23 Dec, 2020: 1 commit
  10. 03 Dec, 2020: 1 commit
    • arm64: uaccess: remove set_fs() · 3d2403fd
      Authored by Mark Rutland
      Now that the uaccess primitives don't take addr_limit into account, we
      have no need to manipulate this via set_fs() and get_fs(). Remove
      support for these, along with some infrastructure this renders
      redundant.
      
      We no longer need to flip UAO to access kernel memory under KERNEL_DS,
      and head.S unconditionally clears UAO for all kernel configurations via
      an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
      nor do we need to context-switch it. However, we still need to adjust
      PAN during SDEI entry.
      
      Masking of __user pointers no longer needs to use the dynamic value of
      addr_limit, and can use a constant derived from the maximum possible
      userspace task size. A new TASK_SIZE_MAX constant is introduced for
      this, which is also used by core code. In configurations supporting
      52-bit VAs, this may include a region of unusable VA space above a
      48-bit TTBR0 limit, but never includes any portion of TTBR1.
      
      Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
      KERNEL_DS were inclusive limits, and is converted to a mask by
      subtracting one.
      
      As the SDEI entry code repurposes the otherwise unnecessary
      pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
      context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
      consider factoring that out.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
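      The exclusive-vs-inclusive point can be shown with plain arithmetic. A sketch, assuming an example 48-bit configuration (the real kernel's `__uaccess_mask_ptr()` uses a conditional-select sequence; a plain AND is shown here only to illustrate the mask's value):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Example configuration only: a 48-bit userspace VA range. */
      #define EXAMPLE_TASK_SIZE_MAX (1ULL << 48)            /* exclusive limit */
      #define EXAMPLE_USER_DS (EXAMPLE_TASK_SIZE_MAX - 1)   /* old inclusive limit */

      /* Constant pointer mask derived by subtracting one from the
       * exclusive limit; valid user addresses pass through unchanged. */
      static uint64_t mask_user_ptr(uint64_t ptr)
      {
          return ptr & (EXAMPLE_TASK_SIZE_MAX - 1);
      }
      ```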
  11. 29 Sep, 2020: 3 commits
    • arm64: Rewrite Spectre-v4 mitigation code · c2876207
      Authored by Will Deacon
      Rewrite the Spectre-v4 mitigation handling code to follow the same
      approach as that taken by Spectre-v2.
      
      For now, report to KVM that the system is vulnerable (by forcing
      'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in
      subsequent steps.
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Group start_thread() functions together · a8de9498
      Authored by Will Deacon
      The is_ttbrX_addr() functions have somehow ended up in the middle of
      the start_thread() functions, so move them out of the way to keep the
      code readable.
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Rewrite Spectre-v2 mitigation code · d4647f0a
      Authored by Will Deacon
      The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
      This is largely due to it being written hastily, without much clue as to
      how things would pan out, and also because it ends up mixing policy and
      state in such a way that it is very difficult to figure out what's going
      on.
      
      Rewrite the Spectre-v2 mitigation so that it clearly separates state from
      policy and follows a more structured approach to handling the mitigation.
      Signed-off-by: Will Deacon <will@kernel.org>
  12. 04 Sep, 2020: 3 commits
    • arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks · 93f067f6
      Authored by Catalin Marinas
      In preparation for ptrace() access to the prctl() value, allow calling
      these functions on non-current tasks.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
    • arm64: mte: Allow user control of the generated random tags via prctl() · af5ce952
      Authored by Catalin Marinas
      The IRG, ADDG and SUBG instructions insert a random tag in the resulting
      address. Certain tags can be excluded via the GCR_EL1.Exclude bitmap
      when, for example, the user wants a certain colour for freed buffers.
      Since the GCR_EL1 register is not accessible at EL0, extend the
      prctl(PR_SET_TAGGED_ADDR_CTRL) interface to include a 16-bit field in
      the first argument for controlling which tags can be generated by the
      above instructions (an include rather than an exclude mask). Note that by
      default all non-zero tags are excluded. This setting is per-thread.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
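      The include-to-exclude translation is a 16-bit complement; a minimal sketch of just that arithmetic (the function name is illustrative, not a kernel symbol):

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define MTE_TAG_COUNT_MASK 0xffffU  /* one bit per possible 4-bit tag */

      /* The prctl() field carries an include mask of tags the user wants
       * IRG/ADDG/SUBG to generate; GCR_EL1.Exclude has the opposite
       * polarity, so the kernel complements it. */
      static uint16_t include_to_gcr_exclude(uint16_t incl)
      {
          return (uint16_t)(~incl & MTE_TAG_COUNT_MASK);
      }
      ```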
    • arm64: mte: Allow user control of the tag check mode via prctl() · 1c101da8
      Authored by Catalin Marinas
      By default, even if PROT_MTE is set on a memory range, there is no tag
      check fault reporting (SIGSEGV). Introduce a set of options to the
      existing prctl(PR_SET_TAGGED_ADDR_CTRL) to allow user control of the tag
      check fault mode:
      
        PR_MTE_TCF_NONE  - no reporting (default)
        PR_MTE_TCF_SYNC  - synchronous tag check fault reporting
        PR_MTE_TCF_ASYNC - asynchronous tag check fault reporting
      
      These options translate into the corresponding SCTLR_EL1.TCF0 bitfield,
      context-switched by the kernel. Note that kernel accesses to the
      user address space (e.g. read() system call) are not checked if the user
      thread tag checking mode is PR_MTE_TCF_NONE or PR_MTE_TCF_ASYNC. If the
      tag checking mode is PR_MTE_TCF_SYNC, the kernel makes a best effort to
      check its user address accesses, but it cannot always guarantee this.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
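      A userspace caller combines one of these modes with the tagged-address-ABI enable bit. A sketch, with the constants mirrored locally to stay self-contained (values as in the uapi prctl.h of that era; verify against your headers before relying on them):

      ```c
      #include <assert.h>

      /* Local mirrors of the uapi constants (illustrative; verify against
       * <linux/prctl.h>). */
      #define PR_TAGGED_ADDR_ENABLE (1UL << 0)
      #define PR_MTE_TCF_SHIFT      1
      #define PR_MTE_TCF_NONE       (0UL << PR_MTE_TCF_SHIFT)
      #define PR_MTE_TCF_SYNC       (1UL << PR_MTE_TCF_SHIFT)
      #define PR_MTE_TCF_ASYNC      (2UL << PR_MTE_TCF_SHIFT)

      /* Argument a thread would pass as
       * prctl(PR_SET_TAGGED_ADDR_CTRL, arg, 0, 0, 0) to enable the
       * tagged-address ABI together with a tag-check-fault mode. */
      static unsigned long mte_ctrl_arg(unsigned long tcf_mode)
      {
          return PR_TAGGED_ADDR_ENABLE | tcf_mode;
      }
      ```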
  13. 21 Mar, 2020: 1 commit
  14. 18 Mar, 2020: 2 commits
  15. 06 Nov, 2019: 1 commit
    • arm64: mm: Remove MAX_USER_VA_BITS definition · 218564b1
      Authored by Bhupesh Sharma
      commit 9b31cf49 ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
      introduced the MAX_USER_VA_BITS definition, which was used to support
      the arm64 mm use-cases where user-space could use 52-bit virtual
      addresses while kernel-space was still limited to a maximum of 48-bit
      virtual addressing.
      
      But, now with commit b6d00d47 ("arm64: mm: Introduce 52-bit Kernel
      VAs"), we removed the 52-bit user/48-bit kernel kconfig option and hence
      there is no longer any scenario where user VA != kernel VA size
      (even with CONFIG_ARM64_FORCE_52BIT enabled, the same is true).
      
      Hence we can do away with the MAX_USER_VA_BITS macro as it is equal to
      VA_BITS (maximum VA space size) in all possible use-cases. Note that
      even though the 'vabits_actual' value would be 48 on arm64 hardware
      which doesn't support the ARMv8.2 LVA extension (even when
      CONFIG_ARM64_VA_BITS_52 is enabled), VA_BITS would still be set to
      52. Hence this change
      would be safe in all possible VA address space combinations.
      
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: kexec@lists.infradead.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 28 Oct, 2019: 1 commit
    • arm64: entry-common: don't touch daif before bp-hardening · bfe29874
      Authored by James Morse
      The previous patches mechanically transformed the assembly version of
      entry.S to entry-common.c for synchronous exceptions.
      
      The C version of local_daif_restore() doesn't quite do the same thing
      as the assembly versions if pseudo-NMI is in use. In particular,
      | local_daif_restore(DAIF_PROCCTX_NOIRQ)
      will still allow pNMI to be delivered. This is not the behaviour
      do_el0_ia_bp_hardening() and do_sp_pc_abort() want as it should not
      be possible for the PMU handler to run as an NMI until the bp-hardening
      sequence has run.
      
      The bp-hardening calls were placed where they are because this was the
      first C code to run after the relevant exceptions. As we've now moved
      that point earlier, move the checks and calls earlier too.
      
      This makes it clearer that this stuff runs before any kind of exception,
      and saves modifying PSTATE twice.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 25 Sep, 2019: 1 commit
  18. 09 Aug, 2019: 2 commits
  19. 07 Aug, 2019: 1 commit
  20. 05 Aug, 2019: 1 commit
    • arm64: remove pointless __KERNEL__ guards · b907b80d
      Authored by Mark Rutland
      For a number of years, UAPI headers have been split from kernel-internal
      headers. The latter are never exposed to userspace, and always built
      with __KERNEL__ defined.
      
      Most headers under arch/arm64 don't have __KERNEL__ guards, but there
      are a few stragglers lying around. To make things more consistent, and
      to set a good example going forward, let's remove these redundant
      __KERNEL__ guards.
      
      In a couple of cases, a trailing #endif lacked a comment describing its
      corresponding #if or #ifdef, so these are fixed up at the same time.
      
      Guards in auto-generated crypto code are left as-is, as these guards are
      generated by scripting imported from the upstream openssl project
      scripts. Guards in UAPI headers are left as-is, as these can be included
      by userspace or the kernel.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  21. 22 Jul, 2019: 1 commit
  22. 19 Jun, 2019: 1 commit
  23. 30 Apr, 2019: 1 commit
  24. 11 Apr, 2019: 1 commit
    • arm64: compat: Reduce address limit · d2631193
      Authored by Vincenzo Frascino
      Currently, compat tasks running on arm64 can allocate memory up to
      TASK_SIZE_32 (UL(0x100000000)).
      
      This means that mmap() allocations, if we treat them as returning an
      array, are not compliant with section 6.5.8 of the C standard
      (C99) which states that: "If the expression P points to an element of
      an array object and the expression Q points to the last element of the
      same array object, the pointer expression Q+1 compares greater than P".
      
      Redefine TASK_SIZE_32 to address the issue.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: <stable@vger.kernel.org>
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      [will: fixed typo in comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
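      The standards problem is easy to demonstrate: if an allocation may reach the very top of the 4 GiB compat range, the one-past-the-end pointer wraps to zero and no longer compares greater. A sketch modelling compat addresses as integers (so the wraparound itself stays well-defined):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Model 32-bit compat addresses as uint32_t. C99 6.5.8 requires the
       * one-past-the-end address of an array to compare greater than any
       * element's address; wraparound at 4 GiB breaks that. */
      static int one_past_end_compares_greater(uint32_t base, uint32_t size)
      {
          uint32_t q_plus_1 = base + size;  /* one past the last element */
          return q_plus_1 > base;
      }
      ```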
  25. 06 Feb, 2019: 1 commit
  26. 14 Dec, 2018: 3 commits
    • arm64: ptr auth: Move per-thread keys from thread_info to thread_struct · 84931327
      Authored by Will Deacon
      We don't need to get at the per-thread keys from assembly at all, so
      they can live alongside the rest of the per-thread register state in
      thread_struct instead of thread_info.
      
      This will also allow straightforward whitelisting of the keys for
      hardened usercopy should we expose them via a ptrace request later on.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: add prctl control for resetting ptrauth keys · ba830885
      Authored by Kristina Martsenko
      Add an arm64-specific prctl to allow a thread to reinitialize its
      pointer authentication keys to random values. This can be useful when
      exec() is not used for starting new processes, to ensure that different
      processes still have different keys.
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
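      Assembling the key-selection argument for the new prctl is simple bit arithmetic; a sketch with the constants mirrored locally (values as in the uapi header this interface added; treat them as illustrative and verify against <linux/prctl.h>):

      ```c
      #include <assert.h>

      /* Local mirrors of the uapi key-selection bits (illustrative). */
      #define PR_PAC_APIAKEY (1UL << 0)
      #define PR_PAC_APIBKEY (1UL << 1)
      #define PR_PAC_APDAKEY (1UL << 2)
      #define PR_PAC_APDBKEY (1UL << 3)
      #define PR_PAC_APGAKEY (1UL << 4)

      /* Build the argument for prctl(PR_PAC_RESET_KEYS, arg, 0, 0, 0);
       * an arg of 0 asks for all keys to be reset. */
      static unsigned long pac_reset_arg(int instr_keys, int data_keys, int generic_key)
      {
          unsigned long arg = 0;
          if (instr_keys)
              arg |= PR_PAC_APIAKEY | PR_PAC_APIBKEY;
          if (data_keys)
              arg |= PR_PAC_APDAKEY | PR_PAC_APDBKEY;
          if (generic_key)
              arg |= PR_PAC_APGAKEY;
          return arg;
      }
      ```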
    • arm64: expose user PAC bit positions via ptrace · ec6e822d
      Authored by Mark Rutland
      When pointer authentication is in use, data/instruction pointers have a
      number of PAC bits inserted into them. The number and position of these
      bits depends on the configured TCR_ELx.TxSZ and whether tagging is
      enabled. ARMv8.3 allows tagging to differ for instruction and data
      pointers.
      
      For userspace debuggers to unwind the stack and/or to follow pointer
      chains, they need to be able to remove the PAC bits before attempting to
      use a pointer.
      
      This patch adds a new structure with masks describing the location of
      the PAC bits in userspace instruction and data pointers (i.e. those
      addressable via TTBR0), which userspace can query via PTRACE_GETREGSET.
      By clearing these bits from pointers (and replacing them with the value
      of bit 55), userspace can acquire the PAC-less versions.
      
      This new regset is exposed when the kernel is built with (user) pointer
      authentication support, and the address authentication feature is
      enabled. Otherwise, the regset is hidden.
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      [will: Fix to use vabits_user instead of VA_BITS and rename macro]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
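      Given a mask from the regset, a debugger strips the PAC by clearing the masked bits and refilling them with copies of bit 55, as the message describes. A sketch (the mask value used in testing is only an example; real masks depend on TCR_ELx.TxSZ and tagging):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Strip the PAC from a userspace pointer: clear the bits covered by
       * the ptrace-provided mask, then replace them with copies of bit 55
       * (the TTBR0/TTBR1 select bit). */
      static uint64_t strip_pac(uint64_t ptr, uint64_t pac_mask)
      {
          uint64_t cleared = ptr & ~pac_mask;
          if (ptr & (1ULL << 55))
              cleared |= pac_mask;
          return cleared;
      }
      ```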
  27. 12 Dec, 2018: 1 commit
    • arm64: mm: Introduce MAX_USER_VA_BITS definition · 9b31cf49
      Authored by Will Deacon
      With the introduction of 52-bit virtual addressing for userspace, we are
      now in a position where the virtual addressing capability of userspace
      may exceed that of the kernel. Consequently, the VA_BITS definition
      cannot be used blindly, since it reflects only the size of kernel
      virtual addresses.
      
      This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
      depending on whether 52-bit virtual addressing has been configured at
      build time, removing a few places where the 52 is open-coded based on
      explicit CONFIG_ guards.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
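      The definition itself is just a config-dependent constant; a sketch of its shape under an example configuration (48-bit kernel VAs, 52-bit user VAs; the CONFIG_ symbol name follows the commit's era and should be checked against the actual header):

      ```c
      #include <assert.h>

      /* Example configuration for the sketch. */
      #define VA_BITS 48
      #define CONFIG_ARM64_USER_VA_BITS_52 1

      /* Shape of the new definition: either VA_BITS or 52, depending on
       * whether 52-bit user virtual addressing was configured. */
      #ifdef CONFIG_ARM64_USER_VA_BITS_52
      #define MAX_USER_VA_BITS 52
      #else
      #define MAX_USER_VA_BITS VA_BITS
      #endif
      ```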
  28. 11 Dec, 2018: 4 commits