1. 23 December 2020 (1 commit)
  2. 16 December 2020 (3 commits)
    • arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC · 5d6ad668
      Mike Rapoport committed
      The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must
      never fail.  With this assumption it wouldn't be safe to allow general
      usage of this function.
      
      Moreover, some architectures that implement __kernel_map_pages() have this
      function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
      pages when page allocation debugging is disabled at runtime.
      
      As all the users of __kernel_map_pages() were converted to use
      debug_pagealloc_map_pages(), it is safe to make it available only when
      DEBUG_PAGEALLOC is set (a minimal sketch of the resulting interface
      follows this entry).
      
      Link: https://lkml.kernel.org/r/20201109192128.960-4-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d6ad668
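      A minimal sketch of the interface shape described above, assuming
      debug_pagealloc_enabled_static() as the runtime switch (the exact guards
      and declarations in mainline may differ):

        /* include/linux/mm.h (sketch): __kernel_map_pages() is only visible when
         * DEBUG_PAGEALLOC is configured; everyone else uses the wrapper, which is
         * a no-op unless page allocation debugging was enabled at boot. */
        #ifdef CONFIG_DEBUG_PAGEALLOC
        extern void __kernel_map_pages(struct page *page, int numpages, int enable);

        static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
        {
                if (debug_pagealloc_enabled_static())
                        __kernel_map_pages(page, numpages, 1);
        }
        #else
        static inline void debug_pagealloc_map_pages(struct page *page, int numpages) { }
        #endif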
    • arm, arm64: move free_unused_memmap() to generic mm · 4f5b0c17
      Mike Rapoport committed
      ARM and ARM64 free unused parts of the memory map just before the
      initialization of the page allocator. To allow holes in the memory map both
      architectures overload pfn_valid() and define HAVE_ARCH_PFN_VALID.
      
      Allowing holes in the memory map for FLATMEM may be useful for small
      machines, such as ARC and m68k, and will enable those architectures to
      cease using DISCONTIGMEM while still supporting more than one memory bank.
      
      Move the functions that free the unused memory map to generic mm and
      enable them when HAVE_ARCH_PFN_VALID=y (a sketch of the hole-freeing
      logic follows this entry).
      
      Link: https://lkml.kernel.org/r/20201101170454.9567-10-rppt@kernel.org
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Meelis Roos <mroos@linux.ee>
      Cc: Michael Schmitz <schmitzmic@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f5b0c17
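      As a rough illustration of the hole-freeing logic being moved, a sketch
      along the lines of the pre-existing ARM code (helper name and memblock
      call are taken as assumptions, not the exact generic implementation):

        /* Return the memory map backing a hole [start_pfn, end_pfn) to memblock. */
        static void __init free_memmap(unsigned long start_pfn, unsigned long end_pfn)
        {
                struct page *start_pg, *end_pg;
                phys_addr_t pg, pgend;

                /* Convert the PFN range into the physical range of its struct pages. */
                start_pg = pfn_to_page(start_pfn - 1) + 1;
                end_pg = pfn_to_page(end_pfn - 1) + 1;
                pg = PAGE_ALIGN(__pa(start_pg));
                pgend = __pa(end_pg) & PAGE_MASK;

                /* Free only whole pages of memmap that lie entirely inside the hole. */
                if (pg < pgend)
                        memblock_free(pg, pgend - pg);
        }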
    • arm64: mremap speedup - enable HAVE_MOVE_PUD · f5308c89
      Kalesh Singh committed
      HAVE_MOVE_PUD enables remapping pages at the PUD level if both the source
      and destination addresses are PUD-aligned.
      
      With HAVE_MOVE_PUD enabled it can be inferred that there is approximately
      a 19x improvement in performance on arm64.  (See data below).
      
      ------- Test Results ---------
      
      The following results were obtained using a 5.4 kernel, by remapping a
      PUD-aligned, 1GB sized region to a PUD-aligned destination.  The results
      from 10 iterations of the test are given below:
      
      Total mremap times for 1GB data on arm64. All times are in nanoseconds.
      
        Control          HAVE_MOVE_PUD
      
        1247761          74271
        1219896          46771
        1094792          59687
        1227760          48385
        1043698          76666
        1101771          50365
        1159896          52500
        1143594          75261
        1025833          61354
        1078125          48697
      
        1134312.6        59395.7    <-- Mean time in nanoseconds
      
      A 1GB mremap completion time drops from ~1.1 milliseconds to ~59
      microseconds on arm64 (a ~19x speed-up). A hypothetical userspace timing
      sketch follows this entry.
      
      Link: https://lkml.kernel.org/r/20201014005320.2233162-5-kaleshsingh@google.com
      Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Christian Brauner <christian.brauner@ubuntu.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Gavin Shan <gshan@redhat.com>
      Cc: Hassan Naveed <hnaveed@wavecomp.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jia He <justin.he@arm.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Krzysztof Kozlowski <krzk@kernel.org>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Minchan Kim <minchan@google.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Sandipan Das <sandipan@linux.ibm.com>
      Cc: SeongJae Park <sjpark@amazon.de>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f5308c89
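      A hypothetical userspace sketch of the measurement described above. It is
      simplified: it does not force PUD alignment of the source and destination
      (the real test maps both at PUD-aligned addresses), so it only shows the
      general timing approach:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>
        #include <time.h>

        #define GiB (1UL << 30)

        int main(void)
        {
                struct timespec t0, t1;
                void *src, *dst;

                /* Populate 1GB of anonymous memory so mremap() has page tables to move. */
                src = mmap(NULL, GiB, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
                if (src == MAP_FAILED)
                        return 1;

                clock_gettime(CLOCK_MONOTONIC, &t0);
                dst = mremap(src, GiB, GiB, MREMAP_MAYMOVE);
                clock_gettime(CLOCK_MONOTONIC, &t1);

                printf("mremap: %ld ns (new address %p)\n",
                       (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec),
                       dst);
                return dst == MAP_FAILED;
        }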
  3. 12 December 2020 (1 commit)
  4. 04 December 2020 (1 commit)
  5. 03 December 2020 (2 commits)
    • arm64: uaccess: remove vestigal UAO support · 1517c4fa
      Mark Rutland committed
      Now that arm64 no longer uses UAO, remove the vestigial feature detection
      code and Kconfig text.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1517c4fa
    • arm64: uaccess: remove set_fs() · 3d2403fd
      Mark Rutland committed
      Now that the uaccess primitives don't take addr_limit into account, we
      have no need to manipulate this via set_fs() and get_fs(). Remove
      support for these, along with some infrastructure this renders
      redundant.
      
      We no longer need to flip UAO to access kernel memory under KERNEL_DS,
      and head.S unconditionally clears UAO for all kernel configurations via
      an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
      nor do we need to context-switch it. However, we still need to adjust
      PAN during SDEI entry.
      
      Masking of __user pointers no longer needs to use the dynamic value of
      addr_limit, and can use a constant derived from the maximum possible
      userspace task size. A new TASK_SIZE_MAX constant is introduced for
      this, which is also used by core code. In configurations supporting
      52-bit VAs, this may include a region of unusable VA space above a
      48-bit TTBR0 limit, but never includes any portion of TTBR1.
      
      Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
      KERNEL_DS were inclusive limits, and it is converted to a mask by
      subtracting one (see the sketch after this entry).
      
      As the SDEI entry code repurposes the otherwise unnecessary
      pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
      context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
      consider factoring that out.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3d2403fd
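      A minimal sketch of the exclusive-limit idea with an illustrative constant
      (the real arm64 TASK_SIZE_MAX definition and the pointer-masking assembly
      differ; this only shows why subtracting one turns an exclusive limit into
      an inclusive mask):

        #define TASK_SIZE_MAX   (1UL << 48)             /* exclusive limit; illustrative value */
        #define USER_PTR_MASK   (TASK_SIZE_MAX - 1)     /* inclusive limit, usable as a mask   */

        /* With a constant limit there is no per-task addr_limit left to consult.
         * Overflow-safe: the last byte accessed must stay below TASK_SIZE_MAX. */
        static inline int range_ok(unsigned long addr, unsigned long size)
        {
                return size <= TASK_SIZE_MAX && addr <= TASK_SIZE_MAX - size;
        }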
  6. 01 December 2020 (1 commit)
  7. 28 November 2020 (1 commit)
    • arm64: Extend the kernel command line from the bootloader · 1e40d105
      Tyler Hicks committed
      Provide support for additional kernel command line parameters to be
      concatenated onto the end of the command line provided by the
      bootloader. Additional parameters are specified in the CONFIG_CMDLINE
      option when CONFIG_CMDLINE_EXTEND is selected, matching other
      architectures and leveraging existing support in the FDT and EFI stub
      code.
      
      Special care must be taken for the arch-specific nokaslr parsing: search
      both the bootargs FDT property and CONFIG_CMDLINE when
      CONFIG_CMDLINE_EXTEND is in use. (A sketch of the concatenation follows
      this entry.)
      
      There are a couple of known use cases for this feature:
      
      1) Switching between stable and development kernel versions, where one
         of the versions benefits from additional command line parameters,
         such as debugging options.
      2) Specifying additional command line parameters, for additional tuning
         or debugging, when the bootloader does not offer an interactive mode.
      Signed-off-by: Tyler Hicks <tyhicks@linux.microsoft.com>
      Link: https://lore.kernel.org/r/20200921191557.350256-3-tyhicks@linux.microsoft.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1e40d105
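      A rough sketch of the EXTEND behaviour; the function name here is
      hypothetical, and in reality the concatenation is handled by the shared
      FDT/EFI-stub command-line code rather than by arch code:

        #include <linux/string.h>

        /* Append the built-in CONFIG_CMDLINE after the bootloader-provided string. */
        static void append_builtin_cmdline(char *cmdline, size_t len)
        {
        #ifdef CONFIG_CMDLINE_EXTEND
                strlcat(cmdline, " " CONFIG_CMDLINE, len);
        #endif
        }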
  8. 25 November 2020 (1 commit)
  9. 10 November 2020 (2 commits)
    • arm64: cpufeatures: Add capability for LDAPR instruction · 364a5a8a
      Will Deacon committed
      Armv8.3 introduced the LDAPR instruction, which provides weaker memory
      ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an
      RCsc implementation when implementing the Linux memory model, but LDAPR
      can be used as a useful alternative to dependency ordering, particularly
      when the compiler is capable of breaking the dependencies.
      
      Since LDAPR is not available on all CPUs, add a cpufeature to detect it at
      runtime and allow the instruction to be used with alternative code
      patching (a sketch follows this entry).
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      364a5a8a
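      A sketch of how such a capability can be consumed with runtime patching.
      ARM64_HAS_LDAPR is used here as the assumed capability name, and a real
      implementation may need to emit LDAPR via .inst for older assemblers:

        #include <asm/alternative.h>

        static inline unsigned long load_acquire(unsigned long *ptr)
        {
                unsigned long val;

                /* Patched at boot: LDAR (RCsc) everywhere, LDAPR (RCpc) where supported. */
                asm volatile(ALTERNATIVE("ldar  %0, %1",
                                         "ldapr %0, %1",
                                         ARM64_HAS_LDAPR)
                             : "=r" (val) : "Q" (*ptr) : "memory");
                return val;
        }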
    • arm64: mm: extend linear region for 52-bit VA configurations · f4693c27
      Ard Biesheuvel committed
      For historical reasons, the arm64 kernel VA space is configured as two
      equally sized halves, i.e., on a 48-bit VA build, the VA space is split
      into a 47-bit vmalloc region and a 47-bit linear region.
      
      When support for 52-bit virtual addressing was added, this equal split
      was kept, resulting in a substantial waste of virtual address space in
      the linear region:
      
                                 48-bit VA                     52-bit VA
        0xffff_ffff_ffff_ffff +-------------+               +-------------+
                              |   vmalloc   |               |   vmalloc   |
        0xffff_8000_0000_0000 +-------------+ _PAGE_END(48) +-------------+
                              |   linear    |               :             :
        0xffff_0000_0000_0000 +-------------+               :             :
                              :             :               :             :
                              :             :               :             :
                              :             :               :             :
                              :             :               :  currently  :
                              :  unusable   :               :             :
                              :             :               :   unused    :
                              :     by      :               :             :
                              :             :               :             :
                              :  hardware   :               :             :
                              :             :               :             :
        0xfff8_0000_0000_0000 :             : _PAGE_END(52) +-------------+
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :  unusable   :               |             |
                              :             :               |   linear    |
                              :     by      :               |             |
                              :             :               |   region    |
                              :  hardware   :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
                              :             :               |             |
        0xfff0_0000_0000_0000 +-------------+  PAGE_OFFSET  +-------------+
      
      As illustrated above, the 52-bit VA kernel uses 47 bits for the vmalloc
      space (as before), to ensure that a single 64k granule kernel image can
      support any 64k granule capable system, regardless of whether it supports
      the 52-bit virtual addressing extension. However, because the VA space is
      still split in equal halves, the linear region is only 2^51 bytes in
      size, wasting almost half of the 52-bit VA space.
      
      Let's fix this by abandoning the equal split and simply assigning all VA
      space outside of the vmalloc region to the linear region (illustrative
      macros follow this entry).
      
      The KASAN shadow region is reconfigured so that it ends at the start of
      the vmalloc region, and grows downwards. That way, the arrangement of
      the vmalloc space (which contains kernel mappings, modules, BPF region,
      the vmemmap array etc) is identical between non-KASAN and KASAN builds,
      which aids debugging.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Link: https://lore.kernel.org/r/20201008153602.9467-3-ardb@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f4693c27
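      Illustrative macros for the new split (the _PAGE_OFFSET/_PAGE_END helpers
      mirror arm64's memory.h, but the exact post-change definitions are an
      assumption here):

        #define _PAGE_OFFSET(va)        (-(UL(1) << (va)))              /* start of the linear map */
        #define _PAGE_END(va)           (-(UL(1) << ((va) - 1)))        /* end of the linear map   */

        #define PAGE_OFFSET             _PAGE_OFFSET(VA_BITS)

        /* Before: equal split, so PAGE_END = _PAGE_END(VA_BITS) and the linear
         * region covers only half of the kernel VA space.
         * After:  the vmalloc region keeps its 48-bit-sized half, and everything
         * below it belongs to the linear map: PAGE_END = _PAGE_END(VA_BITS_MIN). */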
  10. 03 November 2020 (1 commit)
  11. 31 October 2020 (1 commit)
    • timekeeping: default GENERIC_CLOCKEVENTS to enabled · 0774a6ed
      Arnd Bergmann committed
      Almost all machines use GENERIC_CLOCKEVENTS, so it feels wrong to
      require each one to select that symbol manually.
      
      Instead, enable it whenever CONFIG_LEGACY_TIMER_TICK is disabled as
      a simplification. It should be possible to select both
      GENERIC_CLOCKEVENTS and LEGACY_TIMER_TICK from an architecture now
      and decide at runtime between the two.
      
      For the clockevents arch-support.txt file, this means that additional
      architectures are marked as TODO when they have at least one machine
      that still uses LEGACY_TIMER_TICK, rather than being marked 'ok' when
      at least one machine has been converted. This means that both m68k and
      arm (for riscpc) revert to TODO.
      
      At this point, we could just always enable CONFIG_GENERIC_CLOCKEVENTS
      rather than leaving it off when not needed. I built an m68k
      defconfig kernel (using gcc-10.1.0) and found that this would add
      around 5.5KB in kernel image size:
      
         text	   data	    bss	    dec	    hex	filename
      3861936	1092236	 196656	5150828	 4e986c	obj-m68k/vmlinux-no-clockevent
      3866201	1093832	 196184	5156217	 4ead79	obj-m68k/vmlinux-clockevent
      
      On Arm (MACH_RPC), that difference appears to be twice as large,
      around 11KB on top of a 6MB vmlinux.
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      0774a6ed
  12. 29 October 2020 (1 commit)
  13. 15 October 2020 (1 commit)
  14. 14 October 2020 (1 commit)
  15. 09 October 2020 (1 commit)
  16. 29 September 2020 (1 commit)
    • arm64: Remove Spectre-related CONFIG_* options · 6e5f0927
      Will Deacon committed
      The spectre mitigations are too configurable for their own good, leading
      to confusing logic trying to figure out when we should mitigate and when
      we shouldn't. Although the plethora of command-line options need to stick
      around for backwards compatibility, the default-on CONFIG options that
      depend on EXPERT can be dropped, as the mitigations only do anything if
      the system is vulnerable, a mitigation is available and the command-line
      hasn't disabled it.
      
      Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of
      enabling this code unconditionally.
      Signed-off-by: Will Deacon <will@kernel.org>
      6e5f0927
  17. 18 September 2020 (1 commit)
  18. 14 September 2020 (1 commit)
    • arm64: Allow IPIs to be handled as normal interrupts · d3afc7f1
      Marc Zyngier committed
      In order to deal with IPIs as normal interrupts, let's add
      a new way to register them with the architecture code.
      
      set_smp_ipi_range() takes a range of interrupts and allows
      the arch code to request them as if they were normal interrupts.
      A standard handler is then called by the core IRQ code to deal
      with the IPI.
      
      This means that we don't need to call irq_enter/irq_exit, and
      that we don't need to deal with set_irq_regs either. So let's
      move the dispatcher into its own function, and leave handle_IPI()
      as a compatibility function.
      
      On the sending side, let's make use of ipi_send_mask, which
      already exists for this purpose (a registration sketch follows this
      entry).
      
      One of the major differences is that, in some cases (such as when
      performing IRQ time accounting on the scheduler IPI), we end up with
      nested irq_enter()/irq_exit() pairs.
      Other than the (relatively small) overhead, there should be
      no consequences to it (these pairs are designed to nest
      correctly, and the accounting shouldn't be off).
      Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      d3afc7f1
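      A simplified sketch of how arch SMP code might hand its IPIs to the core
      IRQ layer through the new hook; ipi_irq_base, the per-CPU cookie and the
      use of do_handle_IPI() as the split-out dispatcher are illustrative, and
      the real code also marks the IRQs as IPIs and enables them per CPU:

        #include <linux/interrupt.h>
        #include <linux/percpu.h>

        static int ipi_irq_base;
        static DEFINE_PER_CPU(int, ipi_dummy_dev);

        static irqreturn_t ipi_handler(int irq, void *data)
        {
                do_handle_IPI(irq - ipi_irq_base);      /* dispatcher split out of handle_IPI() */
                return IRQ_HANDLED;
        }

        void __init set_smp_ipi_range(int ipi_base, int n)
        {
                int i;

                for (i = 0; i < n; i++)
                        WARN_ON(request_percpu_irq(ipi_base + i, ipi_handler,
                                                   "IPI", &ipi_dummy_dev));

                ipi_irq_base = ipi_base;
        }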
  19. 11 September 2020 (3 commits)
  20. 09 September 2020 (1 commit)
  21. 04 September 2020 (1 commit)
  22. 29 July 2020 (1 commit)
  23. 28 July 2020 (1 commit)
  24. 25 July 2020 (1 commit)
  25. 22 July 2020 (1 commit)
  26. 15 July 2020 (1 commit)
  27. 05 July 2020 (1 commit)
  28. 03 July 2020 (1 commit)
    • arm64: Document sysctls for emulated deprecated instructions · dd720784
      Mark Brown committed
      We have support for emulating a number of deprecated instructions in the
      kernel, with individual Kconfig options enabling this support per
      instruction. In addition to the Kconfig options we also provide runtime
      control via sysctls, but this is not currently mentioned in the Kconfig
      text, so it is not very discoverable for users. This is particularly
      important for SWP/SWPB, since emulation is disabled by default at runtime
      and must be enabled via the sysctl, causing considerable frustration for
      users who have enabled the config option and are then confused to find
      that the instruction is still faulting. (A userspace sketch follows this
      entry.)
      
      Add a reference to the sysctls in the help text for each of the config
      options, noting that SWP/SWPB is disabled by default, to improve the
      user experience.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Link: https://lore.kernel.org/r/20200625131507.32334-1-broonie@kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      dd720784
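      A small userspace sketch of flipping the runtime control. The
      /proc/sys/abi/swp path and its values (0 = undefined, 1 = emulate) follow
      the arm64 documentation for these sysctls, but treat the exact names as
      an assumption:

        #include <fcntl.h>
        #include <unistd.h>

        /* Enable SWP/SWPB emulation so the deprecated instructions stop faulting. */
        static int enable_swp_emulation(void)
        {
                int fd = open("/proc/sys/abi/swp", O_WRONLY);

                if (fd < 0)
                        return -1;
                if (write(fd, "1\n", 2) != 2) {
                        close(fd);
                        return -1;
                }
                return close(fd);
        }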
  29. 23 June 2020 (1 commit)
    • arm64: Depend on newer binutils when building PAC · 4dc9b282
      Mark Brown committed
      Versions of binutils prior to 2.33.1 don't understand the ELF notes that
      are added by modern compilers to indicate the PAC and BTI options used
      to build the code. This causes them to emit large numbers of warnings in
      the form:
      
      aarch64-linux-gnu-nm: warning: .tmp_vmlinux.kallsyms2: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      
      during the kernel build which is currently causing quite a bit of
      disruption for automated build testing using clang.
      
      In commit 15cd0e67 (arm64: Kconfig: ptrauth: Add binutils version
      check to fix mismatch) we added a dependency on binutils to avoid this
      issue when building with versions of GCC that emit the notes, but we did
      not do so for clang, as it was believed that the existing check for
      .cfi_negate_ra_state already required a new enough binutils. This does
      not appear to be the case for some versions of binutils (e.g. the
      binutils in Debian 10), so instead refactor the check so that we require
      a new enough GNU binutils in all cases other than when we are using an
      old GCC version that does not emit notes.
      
      Other, more exotic combinations of tools, such as using clang, lld and
      gas together, are possible and may have further problems, but rather
      than adding further version checks it looks like the most robust thing
      will be to simply test that we can build cleanly with the configured
      tools. That will require more review and discussion, so do this for now
      to address the immediate problem disrupting build testing.
      Reported-by: KernelCI <bot@kernelci.org>
      Reported-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Link: https://github.com/ClangBuiltLinux/linux/issues/1054
      Link: https://lore.kernel.org/r/20200619123550.48098-1-broonie@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      4dc9b282
  30. 22 June 2020 (1 commit)
  31. 17 June 2020 (2 commits)
    • arm64: bti: Require clang >= 10.0.1 for in-kernel BTI support · b9249cba
      Will Deacon committed
      Unfortunately, most versions of clang that support BTI are capable of
      miscompiling the kernel when converting a switch statement into a jump
      table. As an example, attempting to spawn a KVM guest results in a panic:
      
      [   56.253312] Kernel panic - not syncing: bad mode
      [   56.253834] CPU: 0 PID: 279 Comm: lkvm Not tainted 5.8.0-rc1 #2
      [   56.254225] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
      [   56.254712] Call trace:
      [   56.254952]  dump_backtrace+0x0/0x1d4
      [   56.255305]  show_stack+0x1c/0x28
      [   56.255647]  dump_stack+0xc4/0x128
      [   56.255905]  panic+0x16c/0x35c
      [   56.256146]  bad_el0_sync+0x0/0x58
      [   56.256403]  el1_sync_handler+0xb4/0xe0
      [   56.256674]  el1_sync+0x7c/0x100
      [   56.256928]  kvm_vm_ioctl_check_extension_generic+0x74/0x98
      [   56.257286]  __arm64_sys_ioctl+0x94/0xcc
      [   56.257569]  el0_svc_common+0x9c/0x150
      [   56.257836]  do_el0_svc+0x84/0x90
      [   56.258083]  el0_sync_handler+0xf8/0x298
      [   56.258361]  el0_sync+0x158/0x180
      
      This is because the switch in kvm_vm_ioctl_check_extension_generic()
      is executed as an indirect branch to tail-call through a jump table:
      
      ffff800010032dc8:       3869694c        ldrb    w12, [x10, x9]
      ffff800010032dcc:       8b0c096b        add     x11, x11, x12, lsl #2
      ffff800010032dd0:       d61f0160        br      x11
      
      However, where the target case uses the stack, the landing pad is elided
      due to the presence of a paciasp instruction:
      
      ffff800010032e14:       d503233f        paciasp
      ffff800010032e18:       a9bf7bfd        stp     x29, x30, [sp, #-16]!
      ffff800010032e1c:       910003fd        mov     x29, sp
      ffff800010032e20:       aa0803e0        mov     x0, x8
      ffff800010032e24:       940017c0        bl      ffff800010038d24 <kvm_vm_ioctl_check_extension>
      ffff800010032e28:       93407c00        sxtw    x0, w0
      ffff800010032e2c:       a8c17bfd        ldp     x29, x30, [sp], #16
      ffff800010032e30:       d50323bf        autiasp
      ffff800010032e34:       d65f03c0        ret
      
      Unfortunately, this results in a fatal exception because paciasp is
      compatible only with branch-and-link (call) instructions and not simple
      indirect branches.
      
      A fix is being merged into Clang 10.0.1 so that a 'bti j' instruction is
      emitted as an explicit landing pad in this situation. Make in-kernel
      BTI depend on that compiler version when building with clang.
      
      Cc: Tom Stellard <tstellar@redhat.com>
      Cc: Daniel Kiss <daniel.kiss@arm.com>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
      Acked-by: Nick Desaulniers <ndesaulniers@google.com>
      Link: https://lore.kernel.org/r/20200615105524.GA2694@willie-the-truck
      Link: https://lore.kernel.org/r/20200616183630.2445-1-will@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      b9249cba
    • kconfig: unify cc-option and as-option · 4d0831e8
      Masahiro Yamada committed
      cc-option and as-option are almost the same; both pass the flag to
      $(CC). The main difference is that cc-option stops before the assembly
      stage (the -S option) whereas as-option stops after it (the -c option).
      
      I chose -S because it is slightly faster, but $(cc-option,-gz=zlib)
      returns a wrong result (https://lkml.org/lkml/2020/6/9/1529).
      It has been fixed by commit 7b169944 ("Makefile: Improve compressed
      debug info support detection"), but the assembler should always be
      invoked for more reliable compiler option tests.
      
      However, you cannot simply replace -S with -c because the following
      code in lib/Kconfig.debug would break:
      
          depends on $(cc-option,-gsplit-dwarf)
      
      The combination of -c and -gsplit-dwarf does not accept /dev/null as
      output.
      
        $ cat /dev/null | gcc -gsplit-dwarf -S -x c - -o /dev/null
        $ echo $?
        0
      
        $ cat /dev/null | gcc -gsplit-dwarf -c -x c - -o /dev/null
        objcopy: Warning: '/dev/null' is not an ordinary file
        $ echo $?
        1
      
        $ cat /dev/null | gcc -gsplit-dwarf -c -x c - -o tmp.o
        $ echo $?
        0
      
      There is another flag that creates a separate file based on the
      object file path:
      
        $ cat /dev/null | gcc -ftest-coverage -c -x c - -o /dev/null
        <stdin>:1: error: cannot open /dev/null.gcno
      
      So, we cannot use /dev/null to sink the output.
      
      Align the cc-option implementation with scripts/Kbuild.include.
      
      With -c option used in cc-option, as-option is unneeded.
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Acked-by: Will Deacon <will@kernel.org>
      4d0831e8
  32. 14 June 2020 (1 commit)
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Masahiro Yamada committed
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' has been gradually
      decreasing, but there are still more than 2400 instances.
      
      This commit finishes the conversion. While I touched the lines,
      I also fixed the indentation.
      
      There are a variety of indentation styles found.
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
      In order to convert all of them to 1 tab + 'help', I ran the
      following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      a7f7f624
  33. 11 June 2020 (1 commit)