1. 23 April 2022 (6 commits)
  2. 25 February 2022 (1 commit)
    • arm64/mte: Add userspace interface for enabling asymmetric mode · 766121ba
      Mark Brown authored
      The architecture provides an asymmetric mode for MTE where tag mismatches
      are checked asynchronously for stores but synchronously for loads. Allow
      userspace processes to select this and make it available as a default mode
      via the existing per-CPU sysfs interface.
      
      Since the PR_MTE_TCF_ values are a bitmask (allowing the kernel to choose
      between the multiple modes) and there are no free bits adjacent to the
      existing PR_MTE_TCF_ bits, the set of bits used to specify the mode becomes
      disjoint. Programs using the new interface should be aware of this;
      programs that do not use it will not see any change in behaviour.
      
      When userspace requests two possible modes but the system default for the
      CPU is the third mode (e.g. the default is synchronous but userspace
      requests either asynchronous or asymmetric), the preference order is:
      
         ASYMM > ASYNC > SYNC
      
      This situation was not previously possible: with only two modes and a
      mandatory system default there could be no ambiguity, so this is not an
      ABI change. The chosen order is essentially arbitrary, as we do not have
      a clear metric for what is better here.
      
      If userspace specifically requests asymmetric mode via the prctl() and the
      system does not support it then we return an error. This mirrors how we
      handle the case where userspace enables MTE on a system that does not
      support MTE at all, and matches the behaviour seen when running on an
      older kernel that does not support userspace use of asymmetric mode.
      
      Attempts to set asymmetric mode as the default mode will result in an error
      if the system does not support it.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Tested-by: Branislav Rankov <branislav.rankov@arm.com>
      Link: https://lore.kernel.org/r/20220216173224.2342152-5-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      766121ba
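      A minimal userspace sketch of the new interface, not part of the patch
      itself; the fallback PR_* values below are assumptions and should be
      checked against the kernel's include/uapi/linux/prctl.h:

          #include <stdio.h>
          #include <sys/prctl.h>

          #ifndef PR_SET_TAGGED_ADDR_CTRL
          #define PR_SET_TAGGED_ADDR_CTRL 55           /* assumed UAPI value */
          #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)   /* assumed UAPI value */
          #endif
          #ifndef PR_MTE_TCF_ASYMM
          #define PR_MTE_TCF_ASYMM        (1UL << 19)  /* assumed UAPI value */
          #endif

          int main(void)
          {
                  /* Request tagged addresses with asymmetric tag check faults. */
                  if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                            PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYMM, 0, 0, 0)) {
                          /* An error here mirrors running on a kernel or CPU
                           * without asymmetric mode support. */
                          perror("PR_SET_TAGGED_ADDR_CTRL");
                          return 1;
                  }
                  return 0;
          }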
  3. 21 October 2021 (3 commits)
    • arm64/sve: Track vector lengths for tasks in an array · 5838a155
      Mark Brown authored
      As with SVE, we will track a per-task SME vector length. Convert the
      existing storage for the vector length into an array, and move its
      initialisation in fpsimd_flush_task() into a function.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20211019172247.3045838-10-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      5838a155
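      An illustrative sketch of the shape of the change (paraphrased, not a
      verbatim hunk; the ARM64_VEC_* names follow the series, and an SME
      entry is only added later):

          enum vec_type {
                  ARM64_VEC_SVE = 0,
                  ARM64_VEC_MAX,
          };

          struct thread_struct {
                  /* ... */
                  unsigned int vl[ARM64_VEC_MAX];         /* was: unsigned int sve_vl; */
                  unsigned int vl_onexec[ARM64_VEC_MAX];  /* was: unsigned int sve_vl_onexec; */
                  /* ... */
          };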
    • arm64/sve: Put system wide vector length information into structs · b5bc00ff
      Mark Brown authored
      With the introduction of SME we will have a second vector length in the
      system, enumerated and configured in a very similar fashion to the
      existing SVE vector length. While there are a few differences in how
      things are handled, this is a relatively small portion of the overall
      code, so in order to avoid duplication we factor the common parts out.

      We create two structs, one vl_info for the static hardware properties
      and one vl_config for the runtime configuration, with an array
      instantiated for each, and update all the users to reference these. Some
      accessor functions are provided where helpful for readability, and the
      write that sets the vector length is put into a function, since the
      system register being updated needs to be chosen at compile time.

      This is a mostly mechanical replacement; further work will be required
      to actually make things generic and to ensure that the places where
      there are differences are handled properly.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20211019172247.3045838-8-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      b5bc00ff
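      An abridged, hedged sketch of the two structs (field list paraphrased
      from the series and incomplete):

          /* Static hardware properties, fixed once enumerated at boot. */
          struct vl_info {
                  enum vec_type type;
                  const char *name;       /* For display purposes */
                  int min_vl;             /* Minimum supported vector length */
                  int max_vl;             /* Maximum supported vector length */
                  /* ... bitmaps of the supported vector lengths ... */
          };

          /* Runtime configuration. */
          struct vl_config {
                  int __default_vl;       /* Default VL for new tasks */
          };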
    • arm64/sve: Use accessor functions for vector lengths in thread_struct · 0423eedc
      Mark Brown authored
      In a system with SME there are parallel vector length controls for SVE
      and SME vectors which function in much the same way, so it is desirable
      to share the code for handling them as much as possible. In order to
      prepare for doing this, add a layer of accessor functions for the
      various VL-related operations on tasks.

      Since almost all current interactions are actually via task->thread
      rather than directly with the thread_info, the accessors use that.
      Accessors are provided for both generic and SVE-specific usage; the
      generic accessors should be used for cases where register state is being
      manipulated, since the registers are shared between streaming and
      regular SVE. We therefore know that when SME support is implemented we
      will always already be in the appropriate mode, and so can generalise
      now.
      
      Since we are using task_struct and we don't want to cause widespread
      inclusion of sched.h, the accessors are all out of line; it is hoped
      that none of the uses are in a sufficiently critical path for this to be
      an issue. Those that are most likely to present an issue are in the same
      translation unit, so the compiler may well be able to inline them anyway.
      
      This is purely adding a layer of abstraction; additional work will be
      needed to support tasks using SME.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20211019172247.3045838-7-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      0423eedc
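      A hedged sketch of the accessor layer (signatures paraphrased from the
      series). The helpers are deliberately out of line so that headers using
      them need not pull in sched.h for the task_struct definition:

          /* In fpsimd.c rather than a header, to avoid sched.h inclusion. */
          unsigned int task_get_sve_vl(const struct task_struct *task)
          {
                  return task->thread.sve_vl;
          }

          void task_set_sve_vl(struct task_struct *task, unsigned long vl)
          {
                  task->thread.sve_vl = vl;
          }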
  4. 15 October 2021 (1 commit)
  5. 29 July 2021 (3 commits)
  6. 15 June 2021 (1 commit)
  7. 07 June 2021 (1 commit)
  8. 27 May 2021 (1 commit)
  9. 14 April 2021 (2 commits)
  10. 25 March 2021 (1 commit)
  11. 13 January 2021 (1 commit)
    • arm64: Remove arm64_dma32_phys_limit and its uses · d78050ee
      Catalin Marinas authored
      With the introduction of a dynamic ZONE_DMA range based on DT or IORT
      information, there's no need for CMA allocations from the wider
      ZONE_DMA32 since on most platforms ZONE_DMA will cover the 32-bit
      addressable range. Remove the arm64_dma32_phys_limit and set
      arm64_dma_phys_limit to cover the smallest DMA range required on the
      platform. CMA allocation and crashkernel reservation now go in the
      dynamically sized ZONE_DMA, allowing correct functionality on RPi4.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Zhou <chenzhou10@huawei.com>
      Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> # On RPi4B
      d78050ee
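      A hedged sketch of the resulting zone setup (paraphrased from
      zone_sizes_init(), not a verbatim hunk):

          /* arm64_dma_phys_limit now tracks the smallest zone that must
           * service DMA; the separate 32-bit limit is gone. */
          #ifdef CONFIG_ZONE_DMA
                  arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
                  max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
          #endif
          #ifdef CONFIG_ZONE_DMA32
                  max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
                  if (!arm64_dma_phys_limit)
                          arm64_dma_phys_limit = dma32_phys_limit;
          #endif
                  if (!arm64_dma_phys_limit)
                          arm64_dma_phys_limit = PHYS_MASK + 1;   /* no DMA zones */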
  12. 04 January 2021 (1 commit)
  13. 23 December 2020 (1 commit)
  14. 03 December 2020 (1 commit)
    • arm64: uaccess: remove set_fs() · 3d2403fd
      Mark Rutland authored
      Now that the uaccess primitives don't take addr_limit into account, we
      have no need to manipulate this via set_fs() and get_fs(). Remove
      support for these, along with some infrastructure this renders
      redundant.
      
      We no longer need to flip UAO to access kernel memory under KERNEL_DS,
      and head.S unconditionally clears UAO for all kernel configurations via
      an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
      nor do we need to context-switch it. However, we still need to adjust
      PAN during SDEI entry.
      
      Masking of __user pointers no longer needs to use the dynamic value of
      addr_limit, and can use a constant derived from the maximum possible
      userspace task size. A new TASK_SIZE_MAX constant is introduced for
      this, which is also used by core code. In configurations supporting
      52-bit VAs, this may include a region of unusable VA space above a
      48-bit TTBR0 limit, but never includes any portion of TTBR1.
      
      Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
      KERNEL_DS were inclusive limits, and is converted to a mask by
      subtracting one.
      
      As the SDEI entry code repurposes the otherwise unnecessary
      pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
      context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
      consider factoring that out.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3d2403fd
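      An illustrative sketch of the exclusive-limit-to-mask conversion (the
      USER_PTR_MASK name is hypothetical; the TASK_SIZE_MAX definition
      follows the patch description):

          /* Exclusive limit: the highest userspace VA is TASK_SIZE_MAX - 1. */
          #define TASK_SIZE_MAX   (UL(1) << VA_BITS)

          /* The old USER_DS/KERNEL_DS values were inclusive limits, so the
           * constant mask for __user pointers is derived by subtracting one. */
          #define USER_PTR_MASK   (TASK_SIZE_MAX - 1)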
  15. 29 September 2020 (3 commits)
    • arm64: Rewrite Spectre-v4 mitigation code · c2876207
      Will Deacon authored
      Rewrite the Spectre-v4 mitigation handling code to follow the same
      approach as that taken by Spectre-v2.
      
      For now, report to KVM that the system is vulnerable (by forcing
      'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in
      subsequent steps.
      Signed-off-by: Will Deacon <will@kernel.org>
      c2876207
    • arm64: Group start_thread() functions together · a8de9498
      Will Deacon authored
      The is_ttbrX_addr() functions have somehow ended up in the middle of
      the start_thread() functions, so move them out of the way to keep the
      code readable.
      Signed-off-by: Will Deacon <will@kernel.org>
      a8de9498
    • arm64: Rewrite Spectre-v2 mitigation code · d4647f0a
      Will Deacon authored
      The Spectre-v2 mitigation code is pretty unwieldy and hard to maintain.
      This is largely due to it being written hastily, without much clue as to
      how things would pan out, and also because it ends up mixing policy and
      state in such a way that it is very difficult to figure out what's going
      on.
      
      Rewrite the Spectre-v2 mitigation so that it clearly separates state from
      policy and follows a more structured approach to handling the mitigation.
      Signed-off-by: Will Deacon <will@kernel.org>
      d4647f0a
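      The state/policy split can be seen in the vulnerability state enum the
      rewrite introduces (a sketch; see asm/spectre.h in the tree for the
      authoritative definition):

          /* Reported vulnerability state, kept separate from the
           * user-requested mitigation policy (e.g. "mitigations=off"). */
          enum mitigation_state {
                  SPECTRE_UNAFFECTED,
                  SPECTRE_MITIGATED,
                  SPECTRE_VULNERABLE,
          };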
  16. 04 September 2020 (3 commits)
    • arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks · 93f067f6
      Catalin Marinas authored
      In preparation for ptrace() access to the prctl() value, allow calling
      these functions on non-current tasks.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      93f067f6
    • arm64: mte: Allow user control of the generated random tags via prctl() · af5ce952
      Catalin Marinas authored
      The IRG, ADDG and SUBG instructions insert a random tag in the resulting
      address. Certain tags can be excluded via the GCR_EL1.Exclude bitmap
      when, for example, the user wants a certain colour for freed buffers.
      Since the GCR_EL1 register is not accessible at EL0, extend the
      prctl(PR_SET_TAGGED_ADDR_CTRL) interface to include a 16-bit field in
      the first argument for controlling which tags can be generated by the
      above instructions (an include rather than exclude mask). Note that by
      default all non-zero tags are excluded. This setting is per-thread.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      af5ce952
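      A hedged userspace sketch: allow only tags 1 and 3 to be generated for
      the current thread (the fallback constant is an assumption and should
      be checked against include/uapi/linux/prctl.h):

          #include <sys/prctl.h>

          #ifndef PR_MTE_TAG_SHIFT
          #define PR_MTE_TAG_SHIFT 3      /* assumed UAPI value */
          #endif

          /* The 16-bit field is an include mask: set bits 1 and 3 so that
           * IRG/ADDG/SUBG may generate only those tags. */
          static int mte_allow_tags_1_and_3(void)
          {
                  unsigned long incl = ((1UL << 1) | (1UL << 3)) << PR_MTE_TAG_SHIFT;

                  return prctl(PR_SET_TAGGED_ADDR_CTRL,
                               PR_TAGGED_ADDR_ENABLE | incl, 0, 0, 0);
          }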
    • arm64: mte: Allow user control of the tag check mode via prctl() · 1c101da8
      Catalin Marinas authored
      By default, even if PROT_MTE is set on a memory range, there is no tag
      check fault reporting (SIGSEGV). Introduce a set of options for the
      existing prctl(PR_SET_TAGGED_ADDR_CTRL) to allow user control of the
      tag check fault mode:
      
        PR_MTE_TCF_NONE  - no reporting (default)
        PR_MTE_TCF_SYNC  - synchronous tag check fault reporting
        PR_MTE_TCF_ASYNC - asynchronous tag check fault reporting
      
      These options translate into the corresponding SCTLR_EL1.TCF0 bitfield,
      context-switched by the kernel. Note that kernel accesses to the user
      address space (e.g. the read() system call) are not checked if the user
      thread's tag checking mode is PR_MTE_TCF_NONE or PR_MTE_TCF_ASYNC. If
      the tag checking mode is PR_MTE_TCF_SYNC, the kernel makes a best effort
      to check its user address accesses, but cannot always guarantee this.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      1c101da8
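      A hedged userspace sketch of selecting a mode (constants as defined in
      include/uapi/linux/prctl.h):

          /* Enable tagged addresses with synchronous tag check faults;
           * mismatching accesses to PROT_MTE memory then raise SIGSEGV. */
          if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                    PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0))
                  perror("prctl");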
  17. 21 March 2020 (1 commit)
  18. 18 March 2020 (2 commits)
  19. 06 November 2019 (1 commit)
    • arm64: mm: Remove MAX_USER_VA_BITS definition · 218564b1
      Bhupesh Sharma authored
      commit 9b31cf49 ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
      introduced the MAX_USER_VA_BITS definition, which was used to support
      the arm64 mm use-cases where user-space could use 52-bit virtual
      addresses whereas kernel-space would still be limited to a maximum of
      48-bit virtual addressing.
      
      But, now with commit b6d00d47 ("arm64: mm: Introduce 52-bit Kernel
      VAs"), we have removed the 52-bit user/48-bit kernel kconfig option, and
      hence there is no longer any scenario where the user VA size can differ
      from the kernel VA size (this remains true even with
      CONFIG_ARM64_FORCE_52BIT enabled).
      
      Hence we can do away with the MAX_USER_VA_BITS macro, as it is equal to
      VA_BITS (the maximum VA space size) in all possible use-cases. Note that
      even though 'vabits_actual' would be 48 on arm64 hardware which does not
      support the ARMv8.2-LVA extension (even when CONFIG_ARM64_VA_BITS_52 is
      enabled), VA_BITS would still be set to 52. Hence this change is safe in
      all possible VA address space combinations.
      
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: kexec@lists.infradead.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      218564b1
  20. 28 October 2019 (1 commit)
    • arm64: entry-common: don't touch daif before bp-hardening · bfe29874
      James Morse authored
      The previous patches mechanically transformed the assembly version of
      entry.S to entry-common.c for synchronous exceptions.
      
      The C version of local_daif_restore() doesn't quite do the same thing
      as the assembly versions if pseudo-NMI is in use. In particular,
      | local_daif_restore(DAIF_PROCCTX_NOIRQ)
      will still allow pNMI to be delivered. This is not the behaviour that
      do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not be
      possible for the PMU handler to run as an NMI until the bp-hardening
      sequence has run.
      
      The bp-hardening calls were placed where they are because this was the
      first C code to run after the relevant exceptions. As we've now moved
      that point earlier, move the checks and calls earlier too.
      
      This makes it clearer that this code runs before any further exception
      can be taken, and saves modifying PSTATE twice.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: NCatalin Marinas <catalin.marinas@arm.com>
      bfe29874
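      A hedged sketch of the resulting shape of one handler in
      entry-common.c (paraphrased, not a verbatim hunk):

          static void notrace el0_pc(struct pt_regs *regs, unsigned long esr)
          {
                  /* Apply bp-hardening first, while all DAIF exceptions,
                   * including pseudo-NMI, are still masked. */
                  if (!is_ttbr0_addr(instruction_pointer(regs)))
                          arm64_apply_bp_hardening();

                  /* Only now unmask: PSTATE is written once, not twice. */
                  local_daif_restore(DAIF_PROCCTX);
                  do_sp_pc_abort(instruction_pointer(regs), esr, regs);
          }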
  21. 25 September 2019 (1 commit)
  22. 09 August 2019 (2 commits)
  23. 07 August 2019 (1 commit)
  24. 05 August 2019 (1 commit)
    • arm64: remove pointless __KERNEL__ guards · b907b80d
      Mark Rutland authored
      For a number of years, UAPI headers have been split from kernel-internal
      headers. The latter are never exposed to userspace, and always built
      with __KERNEL__ defined.
      
      Most headers under arch/arm64 don't have __KERNEL__ guards, but there
      are a few stragglers lying around. To make things more consistent, and
      to set a good example going forward, let's remove these redundant
      __KERNEL__ guards.
      
      In a couple of cases, a trailing #endif lacked a comment describing its
      corresponding #if or #ifdef, so these are fixed up at the same time.
      
      Guards in auto-generated crypto code are left as-is, as these guards are
      generated by scripts imported from the upstream OpenSSL project. Guards
      in UAPI headers are left as-is, as those headers can be included by
      userspace or the kernel.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      b907b80d
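      An illustrative before/after for the pattern being removed (the
      ARCH_INTERNAL_THING macro is hypothetical, not from the patch):

          /* Before: a redundant guard in a kernel-internal header. */
          #ifdef __KERNEL__               /* always defined for these headers */
          #define ARCH_INTERNAL_THING 1
          #endif /* __KERNEL__ */

          /* After: the guard is simply dropped. */
          #define ARCH_INTERNAL_THING 1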