1. 21 Mar 2019 (1 commit)
  2. 06 Mar 2019 (1 commit)
  3. 01 Mar 2019 (1 commit)
    • arm64: Add workaround for Fujitsu A64FX erratum 010001 · 3e32131a
      Authored by Zhang Lei
      On Fujitsu-A64FX cores (versions 1.0 and 1.1), a memory access may
      cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs
      under a specific hardware condition when a load/store instruction
      performs an address translation. Any load/store instruction, other than
      the non-fault accesses of Armv8 and SVE, might cause this undefined fault.
      
      The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE
      is enabled to mitigate timing attacks against KASLR, where the kernel
      address space could otherwise be probed using the FFR and suppressed
      faults on SVE loads.
      
      Since this erratum causes spurious exceptions, which may corrupt
      the exception registers, we clear the TCR_ELx.NFDx bits when
      booting on an affected CPU (see the sketch after this entry).
      Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
      [Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler
       and always disabled the NFDx bits on affected CPUs]
      Signed-off-by: James Morse <james.morse@arm.com>
      Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3e32131a
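      For illustration, a minimal C sketch of the mitigation shape described
      above, assuming hypothetical register accessors (the kernel does this
      in the __cpu_setup() assembly via a generated MIDR value/mask, so this
      is not the upstream code). The TCR bit positions and MIDR layout are
      architectural:

        #include <stdint.h>

        /* TCR_EL1 fields (Arm ARM): NFD0 is bit 53, NFD1 is bit 54. */
        #define TCR_NFD0  (1ULL << 53)
        #define TCR_NFD1  (1ULL << 54)

        /* MIDR_EL1 layout: implementer [31:24], partnum [15:4].
         * Fujitsu's implementer code is 0x46; A64FX's part number is 0x001. */
        #define MIDR_IMPLEMENTER(m)  (((m) >> 24) & 0xff)
        #define MIDR_PARTNUM(m)      (((m) >> 4) & 0xfff)

        /* Hypothetical stand-ins for the kernel's sysreg accessors. */
        extern uint64_t read_midr_el1(void);
        extern uint64_t read_tcr_el1(void);
        extern void write_tcr_el1(uint64_t val);

        static void workaround_fujitsu_erratum_010001(void)
        {
            uint64_t midr = read_midr_el1();

            /* On affected A64FX cores, never run with NFDx set: the
             * spurious faults the bits can trigger corrupt state. */
            if (MIDR_IMPLEMENTER(midr) == 0x46 && MIDR_PARTNUM(midr) == 0x001)
                write_tcr_el1(read_tcr_el1() & ~(TCR_NFD0 | TCR_NFD1));
        }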
  4. 20 Feb 2019 (1 commit)
  5. 14 Feb 2019 (3 commits)
  6. 06 Feb 2019 (1 commit)
  7. 22 Jan 2019 (1 commit)
  8. 29 Dec 2018 (1 commit)
  9. 21 Dec 2018 (1 commit)
  10. 20 Dec 2018 (1 commit)
  11. 14 Dec 2018 (2 commits)
  12. 13 Dec 2018 (1 commit)
  13. 12 Dec 2018 (2 commits)
    • arm64: Add memory hotplug support · 4ab21506
      Authored by Robin Murphy
      Wire up the basic support for hot-adding memory. Since memory hotplug
      is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
      cross-check the presence of a section in the manner of the generic
      implementation, before falling back to memblock to check for no-map
      regions within a present section as before. By having arch_add_memory()
      create the linear mapping first, everything then works in the way that
      __add_section() expects (see the sketch after this entry).
      
      We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
      should be safe from races by virtue of the global device hotplug lock.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4ab21506
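      A hedged sketch of the ordering constraint described above. The helper
      names and the signature are simplified stand-ins (the real hook in
      arch/arm64/mm/mmu.c also takes an altmap and related flags):

        /* Hypothetical wrapper around __create_pgd_mapping() on
         * swapper_pg_dir. */
        extern void create_linear_mapping(unsigned long start, unsigned long size);
        /* Simplified signature of the generic sparsemem hotplug entry. */
        extern int __add_pages(int nid, unsigned long start_pfn,
                               unsigned long nr_pages);

        #define PAGE_SHIFT 12   /* 4K pages assumed for the example */

        int arch_add_memory(int nid, unsigned long start, unsigned long size)
        {
            /* Map the new range into the kernel linear region first, so
             * the memmap initialisation done by the generic code finds
             * it mapped, as __add_section() expects. */
            create_linear_mapping(start, size);

            /* Then hand the range to generic sparsemem hotplug. */
            return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT);
        }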
    • arm64: fix ARM64_USER_VA_BITS_52 builds · 4d08d20f
      Authored by Arnd Bergmann
      In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52
      triggered a build failure:
      
      arch/arm64/mm/proc.S:287: Error: immediate out of range
      
      As it turns out, we were incorrectly setting PGTABLE_LEVELS here,
      for lack of any other default value.
      This fixes the calculation of CONFIG_PGTABLE_LEVELS so that all
      combinations are considered again (see the sketch after this entry).
      
      Fixes: 68d23da4 ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4d08d20f
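      For context, a sketch of how the number of translation table levels
      follows from the VA size and page size on arm64; the arithmetic
      matches the kernel's ARM64_HW_PGTABLE_LEVELS() macro, though the
      names here are illustrative:

        #define PAGE_SHIFT 16   /* 64K pages, for the 52-bit example below */

        /* Each level resolves PAGE_SHIFT - 3 bits of VA (8-byte
         * descriptors); the low PAGE_SHIFT bits are the in-page offset:
         *   levels = ceil((va_bits - PAGE_SHIFT) / (PAGE_SHIFT - 3))
         * 52-bit VA, 64K pages: ceil(36 / 13) = 3 levels
         * 48-bit VA,  4K pages: ceil(36 / 9)  = 4 levels */
        #define HW_PGTABLE_LEVELS(va_bits) \
                (((va_bits) - PAGE_SHIFT + (PAGE_SHIFT - 3) - 1) / (PAGE_SHIFT - 3))

      A Kconfig default that ignores one of these combinations leaves
      PGTABLE_LEVELS wrong for it, which is how the out-of-range immediate
      in proc.S came about.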
  14. 11 Dec 2018 (3 commits)
    • arm64: Kconfig: Re-jig CONFIG options for 52-bit VA · 68d23da4
      Authored by Will Deacon
      Enabling 52-bit VAs for userspace is pretty confusing, since it requires
      you to select "48-bit" virtual addressing in the Kconfig.
      
      Rework the logic so that 52-bit user virtual addressing is advertised in
      the "Virtual address space size" choice, along with some help text to
      describe its interaction with Pointer Authentication. The EXPERT-only
      option to force all user mappings to the 52-bit range is then made
      available immediately below the VA size selection.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      68d23da4
    • arm64: mm: Allow forcing all userspace addresses to 52-bit · b9567720
      Authored by Steve Capper
      On arm64, 52-bit VAs are provided to userspace only when a hint above
      the 48-bit range is supplied to mmap. This helps maintain compatibility
      with software that expects at most 48-bit VAs to be returned (see the
      usage sketch after this entry).

      In order to help identify software that carries 48-bit VA assumptions,
      this patch allows one to compile a kernel where 52-bit VAs are returned
      by default on hardware that supports it.
      
      This feature is intended to be for development systems only.
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b9567720
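      A small userspace illustration of the default behaviour this option
      changes, assuming a 52-bit-capable kernel; the hint value is just an
      example address above the 48-bit boundary and is only a hint:

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Without a high hint, mmap() returns addresses below 2^48. */
            void *low = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            /* An explicit hint above 2^48 opts in to the full 52-bit
             * range on supporting hardware. */
            void *high = mmap((void *)(1UL << 50), 4096,
                              PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            printf("default: %p, hinted: %p\n", low, high);
            return 0;
        }

      With the force option enabled, even the first mapping may come back
      above 2^48, flushing out 48-bit assumptions in the application.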
    • arm64: mm: introduce 52-bit userspace support · 67e7fdfc
      Authored by Steve Capper
      On arm64 there is optional support for a 52-bit virtual address space.
      To exploit this, one has to be running with a 64KB page size on
      hardware that supports it.

      For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
      some changes are needed to support a 52-bit userspace (see the T0SZ
      sketch after this entry):
       * TCR_EL1.T0SZ needs to be 12 instead of 16,
       * TASK_SIZE needs to reflect the new size.
      
      This patch implements the above when the support for 52-bit VAs is
      detected at early boot time.
      
      On arm64, userspace address translation is controlled by TTBR0_EL1. As
      well as userspace, TTBR0_EL1 controls:
       * The identity mapping,
       * EFI runtime code.

      It is possible to run a kernel with an identity mapping that has a
      larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
      would set TCR_EL1.T0SZ as appropriate). However, when the conditions
      for 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ
      fixed at 12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic
      is disabled.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      67e7fdfc
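      The T0SZ arithmetic behind the "12 instead of 16" above, mirroring the
      kernel's TCR_T0SZ() definition (the field sits at bits [5:0] of
      TCR_EL1):

        /* TCR_EL1.T0SZ encodes the TTBR0 region size as 64 - va_bits:
         *   48-bit userspace -> T0SZ = 16
         *   52-bit userspace -> T0SZ = 12 */
        #define TCR_T0SZ_OFFSET  0
        #define TCR_T0SZ(va)     (((unsigned long)64 - (va)) << TCR_T0SZ_OFFSET)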
  15. 10 Dec 2018 (1 commit)
  16. 06 Dec 2018 (4 commits)
  17. 02 Dec 2018 (2 commits)
  18. 30 Nov 2018 (1 commit)
    • arm64: Add workaround for Cortex-A76 erratum 1286807 · ce8c80c5
      Authored by Catalin Marinas
      On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual address
      for a cacheable mapping of a location is being accessed by a core while
      another core is remapping the virtual address to a new physical page
      using the recommended break-before-make sequence, then under very rare
      circumstances TLBI+DSB can complete before a read using the translation
      being invalidated has been observed by other observers. The workaround
      repeats the TLBI+DSB operation and is shared with the workaround for
      Qualcomm Falkor erratum 1009 (see the sketch after this entry).
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ce8c80c5
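      A minimal sketch of the "repeat TLBI" shape, with illustrative wrapper
      names; the real kernel folds this into its __tlbi() alternatives
      machinery rather than open-coded C:

        extern void tlbi_vale1is(unsigned long addr);   /* hypothetical TLBI wrapper */
        extern void dsb_ish(void);                      /* hypothetical DSB ISH wrapper */
        extern int  cpu_has_repeat_tlbi_erratum(void);  /* illustrative cap check */

        static void flush_tlb_page_repeat(unsigned long addr)
        {
            tlbi_vale1is(addr);
            dsb_ish();

            /* On affected cores, a single TLBI+DSB may complete before
             * all observers see the invalidation; issuing the sequence
             * a second time closes the window. */
            if (cpu_has_repeat_tlbi_erratum()) {
                tlbi_vale1is(addr);
                dsb_ish();
            }
        }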
  19. 23 Nov 2018 (3 commits)
  20. 20 Nov 2018 (1 commit)
    • arm64: mm: apply r/o permissions of VM areas to its linear alias as well · c55191e9
      Authored by Ard Biesheuvel
      On arm64, we use block mappings and contiguous hints to map the linear
      region, to minimize the TLB footprint. However, this means that the
      entire region is mapped using read/write permissions, which we cannot
      modify at page granularity without having to take intrusive measures to
      prevent TLB conflicts.
      
      This means the linear aliases of pages belonging to read-only mappings
      (executable or otherwise) in the vmalloc region are also mapped read/write,
      and could potentially be abused to modify things like module code, bpf JIT
      code or other read-only data.
      
      So let's fix this, by extending the set_memory_ro/rw routines to take
      the linear alias into account. The consequence of enabling this is
      that we can no longer use block mappings or contiguous hints, so in
      cases where the TLB footprint of the linear region is a bottleneck,
      performance may be affected.
      
      Therefore, allow this feature to be enabled/disabled at runtime by
      setting rodata=full (or 'on' to disable just this enhancement, or
      'off' to disable read-only mappings for code and r/o data entirely)
      on the kernel command line. Also, allow the default value to be set
      via a Kconfig option (see the sketch after this entry).
      Tested-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c55191e9
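      A hedged sketch of the "fix the alias too" idea. All helper names here
      are illustrative; the real code extends the set_memory_ro/rw paths in
      arch/arm64/mm and relies on the linear region now being mapped at page
      granularity:

        extern void change_mapping_prot(unsigned long addr, int numpages,
                                        int prot);            /* hypothetical */
        extern unsigned long linear_alias_of(unsigned long va); /* hypothetical */
        extern int rodata_full;   /* set by rodata=full on the command line */

        #define PAGE_SIZE 4096UL
        #define PROT_RO   1       /* illustrative permission token */

        static void set_memory_ro_with_alias(unsigned long addr, int numpages)
        {
            /* Make the vmalloc mapping read-only... */
            change_mapping_prot(addr, numpages, PROT_RO);

            /* ...and, when rodata=full, its linear-map alias as well,
             * page by page, so the data cannot be modified through the
             * writable alias. */
            if (rodata_full)
                for (int i = 0; i < numpages; i++)
                    change_mapping_prot(linear_alias_of(addr + i * PAGE_SIZE),
                                        1, PROT_RO);
        }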
  21. 31 Oct 2018 (2 commits)
  22. 19 Oct 2018 (1 commit)
    • arm64: use the generic swiotlb_dma_ops · 886643b7
      Authored by Christoph Hellwig
      Now that the generic swiotlb code supports non-coherent DMA, we can
      switch to it for arm64.  For that we need to refactor the existing
      alloc/free/mmap/pgprot helpers to be used as the architecture hooks,
      and implement the standard arch_sync_dma_for_{device,cpu} hooks for
      cache maintenance in the streaming DMA hooks, which also implies
      using the generic dma_coherent flag in struct device (see the sketch
      after this entry).
      
      Note that we need to keep the old is_device_dma_coherent function around
      for now, so that the shared arm/arm64 Xen code keeps working.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      886643b7
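      A hedged sketch of what the arch_sync_dma_for_{device,cpu} hooks
      amount to on a non-coherent arm64 system. Signatures are simplified
      and the cache helpers are illustrative stand-ins for the DC CVAC/IVAC
      range operations; the real hooks also distinguish directions more
      finely:

        #include <stddef.h>

        typedef unsigned long long phys_addr_t;
        enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE,
                                  DMA_FROM_DEVICE };

        extern void dcache_clean_poc(phys_addr_t paddr, size_t size); /* ~DC CVAC */
        extern void dcache_inval_poc(phys_addr_t paddr, size_t size); /* ~DC IVAC */

        /* Before the device touches the buffer: push dirty CPU cache
         * lines out to the point of coherency. */
        void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
                                      enum dma_data_direction dir)
        {
            dcache_clean_poc(paddr, size);
        }

        /* After the device is done: drop stale lines before the CPU reads
         * data the device wrote (nothing to do for CPU-to-device only). */
        void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
                                   enum dma_data_direction dir)
        {
            if (dir != DMA_TO_DEVICE)
                dcache_inval_poc(paddr, size);
        }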
  23. 03 Oct 2018 (1 commit)
    • arm64: arch_timer: avoid unused function warning · 040f3401
      Authored by Arnd Bergmann
      arm64_1188873_read_cntvct_el0() is protected by the correct
      CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is
      also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, which
      causes a warning if that option is disabled:
      
      drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]
      
      Since the erratum requires that we always apply the workaround
      in the timer driver, select that symbol as we do for SoC-specific
      errata (see the sketch after this entry).
      
      Fixes: 95b861a4 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      040f3401
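      For illustration, the mismatch pattern that produces this warning,
      reduced to a hedged stand-alone example (config names shortened, body
      is a placeholder):

        /* Built whenever the erratum workaround is enabled... */
        #ifdef CONFIG_ERRATUM_1188873
        static unsigned long read_cntvct_workaround(void)
        {
            return 0; /* placeholder */
        }
        #endif

        /* ...but only referenced when the OOL-workaround framework is
         * also enabled. With the erratum on and the framework off, the
         * function above is defined yet unused, so -Wunused-function
         * fires (and -Werror turns it into a build failure). */
        #ifdef CONFIG_ARCH_TIMER_OOL_WORKAROUND
        unsigned long (*read_fn)(void) = read_cntvct_workaround;
        #endif

      Making the erratum option select the framework option guarantees the
      reference is always compiled in.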
  24. 01 Oct 2018 (1 commit)
  25. 27 Sep 2018 (1 commit)
    • arm64/kernel: jump_label: Switch to relative references · c296146c
      Authored by Ard Biesheuvel
      On a randomly chosen distro kernel build for arm64, vmlinux.o shows the
      following sections, containing jump label entries, and the associated
      RELA relocation records, respectively:
      
        ...
        [38088] __jump_table      PROGBITS         0000000000000000  00e19f30
             000000000002ea10  0000000000000000  WA       0     0     8
        [38089] .rela__jump_table RELA             0000000000000000  01fd8bb0
             000000000008be30  0000000000000018   I      38178   38088     8
        ...
      
      In other words, we have 190 KB worth of 'struct jump_entry' instances,
      and 573 KB worth of RELA entries to relocate each entry's code, target
      and key members. This means the RELA section occupies 10% of the .init
      segment, and the two sections combined represent 5% of vmlinux's entire
      memory footprint.
      
      So let's switch from 64-bit absolute references to 32-bit relative
      references for the code and target fields, and a 64-bit relative
      reference for the 'key' field (which may reside in another module or
      the core kernel, which may be more than 4 GB away on arm64 when
      running with KASLR enabled): this reduces the size of the __jump_table
      by 33%, and gets rid of the RELA section entirely (see the sketch
      after this entry).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-s390@vger.kernel.org
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Link: https://lkml.kernel.org/r/20180919065144.25010-4-ard.biesheuvel@linaro.org
      c296146c
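      A sketch of the relative layout, following the generic struct
      jump_entry shape this series introduces; the accessors show how
      absolute addresses are recovered from offsets stored relative to the
      entry itself, which is what makes load-time RELA relocations
      unnecessary:

        #include <stdint.h>

        /* No absolute addresses stored, so no RELA records are needed,
         * and each entry shrinks from 24 to 16 bytes. */
        struct jump_entry {
            int32_t code;   /* branch site, relative to &entry->code */
            int32_t target; /* branch target, relative to &entry->target */
            int64_t key;    /* static key, relative to &entry->key;
                             * 64-bit because with KASLR the key may be
                             * more than 4 GB away */
        };

        static inline unsigned long jump_entry_code(const struct jump_entry *e)
        {
            return (unsigned long)&e->code + e->code;
        }

        static inline unsigned long jump_entry_target(const struct jump_entry *e)
        {
            return (unsigned long)&e->target + e->target;
        }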
  26. 21 Sep 2018 (1 commit)
    • arm64: Kconfig: Remove ARCH_HAS_HOLES_MEMORYMODEL · 8a695a58
      Authored by James Morse
      include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as
      relevant when parts of the memmap have been free()d. This would
      happen on systems where memory is smaller than a sparsemem section
      and the extra struct pages are expensive. pfn_valid() on these
      systems returns true for the whole sparsemem section, so an extra
      memmap_valid_within() check is needed.
      
      On arm64 we have nomap memory, so we always provide pfn_valid() to
      test for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra
      checks are already rolled up into pfn_valid() (see the sketch after
      this entry).
      
      Remove it.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8a695a58
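      A sketch of the arm64-style pfn_valid() this reasoning relies on;
      memblock_is_map_memory() is the real memblock query, with types
      simplified for the stand-alone example:

        extern int memblock_is_map_memory(unsigned long long phys);
        #define PAGE_SHIFT 12

        int pfn_valid(unsigned long pfn)
        {
            /* NOMAP regions and holes are not "map memory", so their
             * PFNs are rejected and any freed memmap entries behind
             * them are never dereferenced. */
            return memblock_is_map_memory((unsigned long long)pfn << PAGE_SHIFT);
        }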
  27. 18 Sep 2018 (1 commit)
    • arm64: mm: Support Common Not Private translations · 5ffdfaed
      Authored by Vladimir Murzin
      Common Not Private (CNP) is a feature of the ARMv8.2 extension which
      allows translation table entries to be shared between different PEs in
      the same inner shareable domain, so the hardware can use this fact to
      optimise the caching of such entries in the TLB.

      CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to
      the hardware that the translation table entries pointed to by this
      TTBR are the same on every PE in the same inner shareable domain
      whose equivalent TTBR also has the CNP bit set. If the CNP bit is set
      but the TTBRs do not point at the same translation table entries for
      a given ASID and VMID, then the system is misconfigured and the
      results of translations are UNPREDICTABLE.
      
      For the kernel we postpone setting CNP until all CPUs are up, and rely
      on the cpufeature framework to 1) patch the code which is sensitive to
      CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be
      reprogrammed as a result of hibernation or cpuidle (via __enable_mmu);
      for these two cases we restore the CnP bit via __cpu_suspend_exit().

      There are a few cases where we need to take care of changes to
      TTBR0_EL1:
        - a switch to the idmap
        - software-emulated PAN

      We rule out the latter via Kconfig options, and for the former we make
      sure that CNP is set for non-zero ASIDs only (see the sketch after
      this entry).
      Reviewed-by: James Morse <james.morse@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5ffdfaed
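      A sketch of what setting CNP amounts to, assuming hypothetical sysreg
      accessors; architecturally, CnP is bit 0 of the TTBR:

        #define TTBR_CNP_BIT  (1UL << 0)   /* architectural CnP position */

        /* Hypothetical stand-ins for read_sysreg()/write_sysreg(). */
        extern unsigned long read_ttbr1_el1(void);
        extern void write_ttbr1_el1(unsigned long val);

        /* Run once all CPUs are up and the CNP capability is confirmed;
         * also re-run after paths that reprogram TTBR1_EL1 via
         * __enable_mmu(), such as hibernation and cpuidle resume. */
        static void cpu_enable_cnp(void)
        {
            write_ttbr1_el1(read_ttbr1_el1() | TTBR_CNP_BIT);
        }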