1. 25 May 2020 (1 commit)
  2. 16 May 2020 (3 commits)
  3. 23 Apr 2020 (1 commit)
    • arch: split MODULE_ARCH_VERMAGIC definitions out to <asm/vermagic.h> · 62d0fd59
      By Masahiro Yamada
      As the bug report [1] pointed out, <linux/vermagic.h> must be included
      after <linux/module.h>.
      
      I believe we should not impose any include order restriction. We often
      sort include directives alphabetically, but it is just coding style
      convention. Technically, we can include header files in any order by
      making every header self-contained.
      
      Currently, arch-specific MODULE_ARCH_VERMAGIC is defined in
      <asm/module.h>, which is not included from <linux/vermagic.h>.
      
      Hence, the straightforward fix-up would be as follows:
      
      |--- a/include/linux/vermagic.h
      |+++ b/include/linux/vermagic.h
      |@@ -1,5 +1,6 @@
      | /* SPDX-License-Identifier: GPL-2.0 */
      | #include <generated/utsrelease.h>
      |+#include <linux/module.h>
      |
      | /* Simply sanity version stamp for modules. */
      | #ifdef CONFIG_SMP
      
      This works well enough, but as a further cleanup I split the
      MODULE_ARCH_VERMAGIC definitions into <asm/vermagic.h>.
      
      With this, <linux/module.h> and <linux/vermagic.h> will be orthogonal,
      and the location of MODULE_ARCH_VERMAGIC definitions will be consistent.
      
      For arc and ia64, MODULE_PROC_FAMILY is used only for defining
      MODULE_ARCH_VERMAGIC, so I squashed it.
      
      For hexagon, nds32, and xtensa, I removed <asm/module.h> entirely
      because it contained nothing but the MODULE_ARCH_VERMAGIC definition.
      Kbuild will automatically generate <asm/module.h> at build time,
      wrapping <asm-generic/module.h>.
      
      [1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnic
      Reported-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Acked-by: Jessica Yu <jeyu@kernel.org>
      62d0fd59
  4. 21 Apr 2020 (1 commit)
    • arm64: sync kernel APIAKey when installing · 3fabb438
      By Mark Rutland
      A direct write to an APxxKey_EL1 register requires a context
      synchronization event to ensure that indirect reads made by subsequent
      instructions (e.g. AUTIASP, PACIASP) observe the new value.
      
      When we initialize the boot task's APIAKey in boot_init_stack_canary()
      via ptrauth_keys_switch_kernel() we miss the necessary ISB, and so there
      is a window where instructions are not guaranteed to use the new APIAKey
      value. This has been observed to result in boot-time crashes where
      PACIASP and AUTIASP within a function used a mixture of the old and new
      key values.
      
      Fix this by having ptrauth_keys_switch_kernel() synchronize the new key
      value with an ISB. At the same time, __ptrauth_key_install() is renamed
      to __ptrauth_key_install_nosync() so that it is obvious that this
      performs no synchronization itself.
      
      Fixes: 28321582 ("arm64: initialize ptrauth keys for kernel booting task")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Will Deacon <will@kernel.org>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Will Deacon <will@kernel.org>
      3fabb438
  5. 15 Apr 2020 (1 commit)
    • arm64: Delete the space separator in __emit_inst · c9a4ef66
      By Fangrui Song
      In assembly, many instances of __emit_inst(x) expand to a directive. In
      a few places __emit_inst(x) is used as an assembler macro argument. For
      example, in arch/arm64/kvm/hyp/entry.S
      
        ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN)
      
      expands to the following by the C preprocessor:
      
        alternative_insn nop, .inst (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1
      
      Both comma and space are separators, with the exception that content
      inside a pair of parentheses/quotes is not split, so the clang
      integrated assembler splits the arguments into:
      
         nop, .inst, (0xd500401f | ((0) << 16 | (4) << 5) | ((!!1) << 8)), 4, 1
      
      GNU as preprocesses the input with do_scrub_chars(). Its arm64 backend
      (along with many other non-x86 backends) sees:
      
        alternative_insn nop,.inst(0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1
        # .inst(...) is parsed as one argument
      
      while its x86 backend sees:
      
        alternative_insn nop,.inst (0xd500401f|((0)<<16|(4)<<5)|((!!1)<<8)),4,1
        # The extra space before '(' makes the whole .inst (...) parsed as two arguments
      
      The non-x86 backend's behavior is considered unintentional
      (https://sourceware.org/bugzilla/show_bug.cgi?id=25750).
      So drop the space separator inside `.inst (...)` to make the clang
      integrated assembler work.
      Suggested-by: Ilie Halip <ilie.halip@gmail.com>
      Signed-off-by: Fangrui Song <maskray@google.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://github.com/ClangBuiltLinux/linux/issues/939
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c9a4ef66
  6. 11 Apr 2020 (1 commit)
    • mm/vma: define a default value for VM_DATA_DEFAULT_FLAGS · c62da0c3
      By Anshuman Khandual
      There are many platforms with the exact same value for
      VM_DATA_DEFAULT_FLAGS.  This creates a default value for
      VM_DATA_DEFAULT_FLAGS in line with the
      existing VM_STACK_DEFAULT_FLAGS.  While here, also define some more
      macros with standard VMA access flag combinations that are used
      frequently across many platforms.  Apart from simplification, this
      reduces code duplication as well.
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Chris Zankel <chris@zankel.net>
      Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c62da0c3
  7. 03 Apr 2020 (1 commit)
    • asm-generic: make more kernel-space headers mandatory · 630f289b
      By Masahiro Yamada
      Change a header to mandatory-y if both of the following are met:
      
      [1] At least one architecture (except um) specifies it as generic-y in
          arch/*/include/asm/Kbuild
      
      [2] Every architecture (except um) either has its own implementation
          (arch/*/include/asm/*.h) or specifies it as generic-y in
          arch/*/include/asm/Kbuild
      
      This commit was generated by the following shell script.
      
      ----------------------------------->8-----------------------------------
      
      arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')
      
      tmpfile=$(mktemp)
      
      grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile
      
      find arch -path 'arch/*/include/asm/Kbuild' |
      	xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
      while read header
      do
      	mandatory=yes
      
      	for arch in $arches
      	do
      		if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
      			! [ -f arch/$arch/include/asm/$header ]; then
      			mandatory=no
      			break
      		fi
      	done
      
      	if [ "$mandatory" = yes ]; then
      		echo "mandatory-y += $header" >> $tmpfile
      
      		for arch in $arches
      		do
      			sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
      		done
      	fi
      
      done
      
      sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild
      
      LANG=C sort $tmpfile >> include/asm-generic/Kbuild
      
      ----------------------------------->8-----------------------------------
      
      One obvious benefit is the diff stat:
      
       25 files changed, 52 insertions(+), 557 deletions(-)
      
      It is tedious to list generic-y for each arch that needs it.
      
      So, mandatory-y works like a fallback default (by just wrapping the
      asm-generic one) when an arch does not have its own header
      implementation.
      
      See the following commits:
      
      def3f7ce
      a1b39bae
      
      It is tedious to convert headers one by one, so I processed them
      with a shell script.
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      630f289b
  8. 02 Apr 2020 (1 commit)
    • arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature · e16e65a0
      By Ard Biesheuvel
      When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
      different permissions (r-x for .text, r-- for .rodata, rw- for .data,
      etc) are rounded up to 2 MiB so they can be mapped more efficiently.
      In particular, it permits the segments to be mapped using level 2
      block entries when using 4k pages, which is expected to result in less
      TLB pressure.
      
      However, the mappings for the bulk of the kernel will use level 2
      entries anyway, and the misaligned fringes are organized such that they
      can take advantage of the contiguous bit, and use far fewer level 3
      entries than would be needed otherwise.
      
      This makes the value of this feature dubious at best, and since it is not
      enabled in defconfig or in the distro configs, it does not appear to be
      in wide use either. So let's just remove it.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will@kernel.org>
      Acked-by: Laura Abbott <labbott@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e16e65a0
  9. 28 Mar 2020 (1 commit)
  10. 25 Mar 2020 (2 commits)
  11. 24 Mar 2020 (2 commits)
  12. 21 Mar 2020 (5 commits)
  13. 20 Mar 2020 (2 commits)
  14. 19 Mar 2020 (1 commit)
    • arm64: kpti: Fix "kpti=off" when KASLR is enabled · c8355785
      By Will Deacon
      Enabling KASLR forces the use of non-global page-table entries for kernel
      mappings, as this is a decision that we have to make very early on before
      mapping the kernel proper. When used in conjunction with the "kpti=off"
      command-line option, it is possible to use non-global kernel mappings but
      with the kpti trampoline disabled.
      
      Since commit 09e3c22a ("arm64: Use a variable to store non-global
      mappings decision"), arm64_kernel_unmapped_at_el0() reflects only the use of
      non-global mappings and does not take into account whether the kpti
      trampoline is enabled. This breaks context switching of the TPIDRRO_EL0
      register for 64-bit tasks, where the clearing of the register is deferred to
      the ret-to-user code, but it also breaks the ARM SPE PMU driver which
      helpfully recommends passing "kpti=off" on the command line!
      
      Report whether or not KPTI is actually enabled in
      arm64_kernel_unmapped_at_el0() and check the 'arm64_use_ng_mappings' global
      variable directly when determining the protection flags for kernel mappings.
      
      Cc: Mark Brown <broonie@kernel.org>
      Reported-by: Hongbo Yao <yaohongbo@huawei.com>
      Tested-by: Hongbo Yao <yaohongbo@huawei.com>
      Fixes: 09e3c22a ("arm64: Use a variable to store non-global mappings decision")
      Signed-off-by: Will Deacon <will@kernel.org>
      c8355785
  15. 18 Mar 2020 (15 commits)
  16. 14 Mar 2020 (1 commit)
    • arm64: cpufeature: add cpus_have_final_cap() · 1db5cdec
      By Mark Rutland
      When cpus_have_const_cap() was originally introduced it was intended to
      be safe in hyp context, where it is not safe to access the cpu_hwcaps
      array as cpus_have_cap() did. For more details see commit:
      
        a4023f68 ("arm64: Add hypervisor safe helper for checking constant capabilities")
      
      We then made use of cpus_have_const_cap() throughout the kernel.
      
      Subsequently, we had to defer updating the static_key associated with
      each capability in order to avoid lockdep complaints. To avoid breaking
      kernel-wide usage of cpus_have_const_cap(), this was updated to fall
      back to the cpu_hwcaps array if called before the static_keys were
      updated. As the kvm hyp code was only called later than this, the
      fallback is redundant but not functionally harmful. For more details,
      see commit:
      
        63a1e1c9 ("arm64/cpufeature: don't use mutex in bringup path")
      
      Today we have more users of cpus_have_const_cap() which are only called
      once the relevant static keys are initialized, and it would be
      beneficial to avoid the redundant code.
      
      To that end, this patch adds a new helper, cpus_have_final_cap(),
      which is intended to be used in code that only runs once
      capabilities have been finalized, and which never checks the
      cpu_hwcaps array. This helps the compiler generate better code, as
      it no longer needs to emit code to address and test the cpu_hwcaps
      array. To help catch misuse, cpus_have_final_cap() will BUG() if
      called before capabilities are finalized.
      
      In hyp context, BUG() will result in a hyp panic, but the specific BUG()
      instance will not be identified in the usual way.
      
      Comments are added to the various cpus_have_*_cap() helpers to describe
      the constraints on when they can be used. For clarity cpus_have_cap() is
      moved above the other helpers. Similarly the helpers are updated to use
      system_capabilities_finalized() consistently, and this is made
      __always_inline as required by its new callers.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1db5cdec
  17. 10 Mar 2020 (1 commit)