  1. 14 Jun 2020 (1 commit)
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Committed by Masahiro Yamada
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' has been gradually
      decreasing, but there are still more than 2400 instances.
      
      This commit finishes the conversion. While touching these lines,
      I also fixed the indentation.
      
      A variety of indentation styles were found:
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
      In order to convert all of them to 1 tab + 'help', I ran the
      following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  2. 19 May 2020 (1 commit)
  3. 15 May 2020 (2 commits)
    • scs: Disable when function graph tracing is enabled · ddc9863e
      Committed by Sami Tolvanen
      With SCS the return address is taken from the shadow stack and the
      value in the frame record has no effect. The mcount-based graph
      tracer hooks returns by modifying frame records on the (regular)
      stack, and is therefore not compatible. The patchable-function-entry
      graph tracer used for DYNAMIC_FTRACE_WITH_REGS modifies the LR
      before it is saved to the shadow stack, and is compatible.
      
      Making the mcount-based graph tracer work with SCS would require a
      mechanism to determine the corresponding slot on the shadow stack
      (and to pass this through the ftrace infrastructure), and we expect
      that everyone will eventually move to the patchable-function-entry
      based graph tracer anyway, so for now let's disable SCS when the
      mcount-based graph tracer is enabled.
      
      SCS and patchable-function-entry are both supported from LLVM 10.x.
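      
      As a rough user-space illustration of the incompatibility (a
      simulation only, with made-up names, not kernel code), patching the
      frame record achieves nothing once returns go through a shadow copy:
      
        #include <stdio.h>
        
        /* Simulated frame record: the slot the mcount graph tracer patches. */
        struct frame_record {
                void *fp;
                void *return_addr;
        };
        
        static void *shadow_stack[16];  /* simulated SCS slots */
        static int scs_top;
        
        static void real_caller(void) { puts("returned to real caller"); }
        static void tracer_hook(void) { puts("returned to tracer hook"); }
        
        static void traced_function(struct frame_record *frame)
        {
                /* Prologue under SCS: the return target also goes to the
                 * shadow stack. */
                shadow_stack[scs_top++] = frame->return_addr;
        
                /* The mcount graph tracer redirects the return by rewriting
                 * the frame record, but knows nothing of the shadow slot. */
                frame->return_addr = (void *)tracer_hook;
        
                /* Epilogue under SCS: the return target comes from the
                 * shadow stack, so the tracer's patch has no effect. */
                void (*ret)(void) = (void (*)(void))shadow_stack[--scs_top];
                ret();
        }
        
        int main(void)
        {
                struct frame_record frame = { 0, (void *)real_caller };
                traced_function(&frame);  /* prints "returned to real caller" */
                return 0;
        }
      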
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • scs: Add support for Clang's Shadow Call Stack (SCS) · d08b9f0c
      Committed by Sami Tolvanen
      This change adds generic support for Clang's Shadow Call Stack,
      which uses a shadow stack to protect return addresses from being
      overwritten by an attacker. Details are available here:
      
        https://clang.llvm.org/docs/ShadowCallStack.html
      
      Note that the security guarantees in the kernel differ from the ones
      documented for user space. The kernel must store addresses of
      shadow stacks in memory, which means an attacker capable of reading
      and writing arbitrary memory may be able to locate them and hijack
      control flow by modifying the stacks.
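      
      For readers new to the mechanism, here is a minimal user-space
      sketch of the idea (the real thing is compiler-generated
      instrumentation keeping the shadow stack pointer in a reserved
      register, x18 on arm64; everything below is simplified):
      
        #include <stdio.h>
        #include <stdint.h>
        
        /* The shadow stack holds only return addresses, separate from the
         * regular stack that holds locals and other spilled state. */
        static uintptr_t shadow_stack[1024];
        static uintptr_t *scs_sp = shadow_stack;
        
        /* What the instrumented prologue does: save the return address. */
        static void scs_push(uintptr_t retaddr) { *scs_sp++ = retaddr; }
        
        /* What the epilogue does: return via the shadow copy. A buffer
         * overflow that clobbers the on-stack return address is harmless,
         * because that copy is never used for the return. */
        static uintptr_t scs_pop(void) { return *--scs_sp; }
        
        int main(void)
        {
                scs_push(0x1234);  /* pretend this is a return address */
                printf("returning via shadow slot: %#lx\n",
                       (unsigned long)scs_pop());
                return 0;
        }
      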
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      [will: Numerous cosmetic changes]
      Signed-off-by: Will Deacon <will@kernel.org>
  4. 13 May 2020 (1 commit)
  5. 16 Mar 2020 (3 commits)
  6. 06 Mar 2020 (1 commit)
  7. 14 Feb 2020 (1 commit)
    • context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Committed by Frederic Weisbecker
      A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call
      into context tracking on user entry/exit; instead they use static
      keys (or nothing at all) to optimize those calls. Ideally, every
      arch should migrate to that behaviour in the long run.
      
      Add a config option to let those archs remove their TIF_NOHZ
      definitions.
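      
      The static-key pattern referred to above looks roughly like the
      sketch below (names simplified and hypothetical; the real entry
      code differs per architecture):
      
        /* A static key patches the branch out entirely when context
         * tracking is unused, instead of testing a TIF_NOHZ thread flag
         * on every user entry/exit. */
        DEFINE_STATIC_KEY_FALSE(context_tracking_key);
        
        static inline void enter_from_user_mode(void)
        {
                if (static_branch_unlikely(&context_tracking_key))
                        context_tracking_enter();  /* NOHZ_FULL active */
        }
      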
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
  8. 04 Feb 2020 (6 commits)
  9. 05 Dec 2019 (1 commit)
  10. 02 Dec 2019 (1 commit)
  11. 25 Nov 2019 (1 commit)
  12. 23 Nov 2019 (1 commit)
  13. 15 Nov 2019 (2 commits)
    • y2038: allow disabling time32 system calls · 942437c9
      Committed by Arnd Bergmann
      At the moment, the compilation of the old time32 system calls
      depends purely on the architecture. As systems with a new libc based
      on 64-bit time_t are getting deployed, even architectures that
      previously supported these calls (notably x86-32 and arm32, but also
      many others) no longer depend on them, and removing them from a
      kernel image results in a smaller binary, the same way we can leave
      out many other optional system calls.
      
      More importantly, on an embedded system that needs to keep working
      beyond the year 2038, any user space program calling these system
      calls is likely a bug, so removing them from the kernel also helps
      in finding broken applications.
      
      I've gone back and forth on hiding this option unless CONFIG_EXPERT
      is set. This version leaves it visible, based on the logic that
      eventually it will be turned off indefinitely.
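      
      A user-space probe for such a kernel could look like the hedged
      example below (whether SYS_time exists, and which calls are time32,
      varies by architecture):
      
        #include <stdio.h>
        #include <errno.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        
        int main(void)
        {
        #ifdef SYS_time
                /* time() is one of the old time32 syscalls on 32-bit
                 * architectures; without time32 support it fails ENOSYS. */
                long ret = syscall(SYS_time, (void *)0);
                if (ret == -1 && errno == ENOSYS)
                        puts("time32 syscalls are compiled out");
                else
                        printf("time() returned %ld\n", ret);
        #else
                puts("no SYS_time on this architecture");
        #endif
                return 0;
        }
      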
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    • y2038: remove CONFIG_64BIT_TIME · 3ca47e95
      Committed by Arnd Bergmann
      The CONFIG_64BIT_TIME option is now defined on all architectures
      and can be removed for simplicity.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  14. 25 Sep 2019 (2 commits)
  15. 07 Sep 2019 (1 commit)
  16. 04 Sep 2019 (1 commit)
    • dma-mapping: remove CONFIG_ARCH_NO_COHERENT_DMA_MMAP · 62fcee9a
      Committed by Christoph Hellwig
      CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
      !CONFIG_MMU, so remove the separate symbol. The only difference is
      that arm did not set it for !CONFIG_MMU, but arm uses a separate dma
      mapping implementation, including its own mmap method. This is
      handled by moving the CONFIG_MMU check in dma_can_mmap so that it
      only applies to the dma-direct case, just like the other ifdefs
      for it.
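      
      From memory, the resulting check is roughly the following (a
      reconstruction for illustration, not the verbatim source):
      
        bool dma_can_mmap(struct device *dev)
        {
                const struct dma_map_ops *ops = get_dma_ops(dev);
        
                if (dma_is_direct(ops)) {
                        /* The MMU check now gates only the dma-direct
                         * path; arm's own dma_map_ops supply ->mmap. */
                        return IS_ENABLED(CONFIG_MMU) &&
                               (dev_is_dma_coherent(dev) ||
                                IS_ENABLED(CONFIG_DMA_NONCOHERENT_MMAP));
                }
                return ops->mmap != NULL;
        }
      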
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
  17. 22 Aug 2019 (1 commit)
  18. 09 Aug 2019 (1 commit)
  19. 05 Aug 2019 (1 commit)
  20. 01 Aug 2019 (1 commit)
  21. 19 Jul 2019 (1 commit)
  22. 04 Jul 2019 (1 commit)
  23. 03 Jun 2019 (1 commit)
  24. 28 May 2019 (1 commit)
    • ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS · 86b3de60
      Committed by Steven Rostedt (VMware)
      Commit c19fa94a ("Add HAVE_64BIT_ALIGNED_ACCESS") added this config
      for architectures that require 64-bit words to be aligned on 64-bit
      boundaries. As the ftrace ring buffer stores data on 4-byte
      alignment, the option was used to force 8-byte alignment instead, so
      that 8-byte words stored and written directly into the ring buffer
      were properly aligned; writing an 8-byte word to a location that is
      4-byte but not 8-byte aligned would otherwise cause problems.
      
      But with the removal of the metag architecture, which was the only
      one to use this, no architecture supported by Linux requires 8-byte
      aligned access for all 8-byte words (4-byte alignment is good
      enough). Removing this config simplifies the code a bit.
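      
      To make the alignment point concrete, here is a small user-space
      illustration (not ring-buffer code): a 64-bit store at a 4-byte
      aligned offset, done through memcpy(), is safe on every architecture
      Linux still supports:
      
        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>
        
        int main(void)
        {
                /* A buffer packed on 4-byte boundaries, as in the ring
                 * buffer. Offset 4 is 4-byte but not 8-byte aligned. */
                uint32_t buf[4] = { 0 };
                uint64_t word = 0x1122334455667788ULL, readback;
        
                /* A direct 64-bit store here could fault on an arch that
                 * demands 8-byte alignment; memcpy never does. */
                memcpy((char *)buf + 4, &word, sizeof(word));
                memcpy(&readback, (char *)buf + 4, sizeof(readback));
                printf("read back %#llx\n", (unsigned long long)readback);
                return 0;
        }
      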
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  25. 15 May 2019 (1 commit)
  26. 30 Apr 2019 (2 commits)
    • x86/mm/cpa: Add set_direct_map_*() functions · d253ca0c
      Committed by Rick Edgecombe
      Add two new functions set_direct_map_default_noflush() and
      set_direct_map_invalid_noflush() for setting the direct map alias for the
      page to its default valid permissions and to an invalid state that cannot
      be cached in a TLB, respectively. These functions do not flush the TLB.
      
      Note, __kernel_map_pages() does something similar but flushes the TLB and
      doesn't reset the permission bits to default on all architectures.
      
      Also add an ARCH config ARCH_HAS_SET_DIRECT_MAP for specifying whether
      these have an actual implementation or a default empty one.
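      
      The intended call pattern leaves flushing to the caller, roughly as
      in this sketch (page and addr are placeholders):
      
        /* Drop the page's direct-map alias, then flush so no stale
         * translation can linger in any TLB. */
        set_direct_map_invalid_noflush(page);
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
        
        /* ... use the page without a direct-map alias ... */
        
        /* Restore the default permissions and flush again. */
        set_direct_map_default_noflush(page);
        flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
      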
      Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <akpm@linux-foundation.org>
      Cc: <ard.biesheuvel@linaro.org>
      Cc: <deneen.t.dock@intel.com>
      Cc: <kernel-hardening@lists.openwall.com>
      Cc: <kristen@linux.intel.com>
      Cc: <linux_dti@icloud.com>
      Cc: <will.deacon@arm.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20190426001143.4983-15-namit@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • y2038: Make CONFIG_64BIT_TIME unconditional · f3d96467
      Committed by Arnd Bergmann
      As Stepan Golosunov points out, there is a small mistake in the
      get_timespec64() function in the kernel. It was originally added under the
      assumption that CONFIG_64BIT_TIME would get enabled on all 32-bit and
      64-bit architectures, but when the conversion was done, it was only turned
      on for 32-bit ones.
      
      The effect is that the get_timespec64() function never clears the upper
      half of the tv_nsec field for 32-bit tasks in compat mode. Clearing this is
      required for POSIX compliant behavior of functions that pass a 'timespec'
      structure with a 64-bit tv_sec and a 32-bit tv_nsec, plus uninitialized
      padding.
      
      The easiest fix for linux-5.1 is to just make the Kconfig symbol
      unconditional, as it was originally intended. As a follow-up, the
      #ifdef CONFIG_64BIT_TIME can be removed completely.
      
      Note: for native 32-bit mode, no change is needed; this works as
      designed and user space should never need to clear the upper 32
      bits of the tv_nsec field, in or out of the kernel.
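      
      The affected path is roughly the following reconstruction of
      get_timespec64() (simplified; the point is the gated masking):
      
        int get_timespec64(struct timespec64 *ts,
                           const struct __kernel_timespec __user *uts)
        {
                struct __kernel_timespec kts;
        
                if (copy_from_user(&kts, uts, sizeof(kts)))
                        return -EFAULT;
        
                ts->tv_sec = kts.tv_sec;
        
                /* tv_nsec is 64-bit in __kernel_timespec; for compat
                 * tasks the upper half is uninitialized padding. With
                 * CONFIG_64BIT_TIME off, this clearing was compiled out
                 * for compat tasks, which is the bug being fixed. */
                if (IS_ENABLED(CONFIG_64BIT_TIME) && in_compat_syscall())
                        kts.tv_nsec &= 0xFFFFFFFFUL;
        
                ts->tv_nsec = kts.tv_nsec;
                return 0;
        }
      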
      
      Fixes: 00bf25d6 ("y2038: use time32 syscall names on 32-bit")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Joseph Myers <joseph@codesourcery.com>
      Cc: libc-alpha@sourceware.org
      Cc: linux-api@vger.kernel.org
      Cc: Deepa Dinamani <deepa.kernel@gmail.com>
      Cc: Lukasz Majewski <lukma@denx.de>
      Cc: Stepan Golosunov <stepan@golosunov.pp.ru>
      Link: https://lore.kernel.org/lkml/20190422090710.bmxdhhankurhafxq@sghpc.golosunov.pp.ru/
      Link: https://lkml.kernel.org/r/20190429131951.471701-1-arnd@arndb.de
  27. 10 Apr 2019 (2 commits)
    • locking/rwsem: Enable lock event counting · a8654596
      Committed by Waiman Long
      Add lock event counting calls so that we can track the number of lock
      events happening in the rwsem code.
      
      With CONFIG_LOCK_EVENT_COUNTS on and booting a 4-socket 112-thread x86-64
      system, the rwsem counts after system bootup were as follows:
      
        rwsem_opt_fail=261
        rwsem_opt_wlock=50636
        rwsem_rlock=445
        rwsem_rlock_fail=0
        rwsem_rlock_fast=22
        rwsem_rtrylock=810144
        rwsem_sleep_reader=441
        rwsem_sleep_writer=310
        rwsem_wake_reader=355
        rwsem_wake_writer=2335
        rwsem_wlock=261
        rwsem_wlock_fail=0
        rwsem_wtrylock=20583
      
      It can be seen that most of the lock acquisitions in the slowpath were
      write-locks in the optimistic spinning code path with no sleeping at
      all. For this system, over 97% of the locks are acquired via optimistic
      spinning. It illustrates the importance of optimistic spinning in
      improving the performance of rwsem.
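      
      The counting calls themselves are one-liners placed at the points of
      interest, along the lines of this sketch (the surrounding function
      is illustrative, not the actual slowpath):
      
        static void rwsem_read_slowpath_sketch(struct rw_semaphore *sem)
        {
                lockevent_inc(rwsem_rlock);          /* slowpath read lock */
                /* ... */
                lockevent_inc(rwsem_sleep_reader);   /* reader will sleep */
                /* ... */
        }
      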
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20190404174320.22416-11-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/lock_events: Make lock_events available for all archs & other locks · fb346fd9
      Committed by Waiman Long
      The QUEUED_LOCK_STAT option to report queued spinlock event counts
      was previously allowed only on the x86 architecture. To make the
      locking event counting code more useful, it is now renamed to the
      more generic LOCK_EVENT_COUNTS config option. This new option will
      be available to all the architectures that use qspinlock at the
      moment.
      
      Other locking code can now start to use the generic locking event
      counting code by including lock_events.h and putting the new locking
      event names into the lock_events_list.h header file.
      
      My experience with lock event counting is that it gives valuable insight
      on how the locking code works and what can be done to make it better. I
      would like to extend this benefit to other locking code like mutex and
      rwsem in the near future.
      
      The PV qspinlock specific code will stay in qspinlock_stat.h. The
      locking event counters will now reside in the <debugfs>/lock_event_counts
      directory.
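      
      Under the hood this is a per-cpu counter array indexed by an enum
      generated from lock_events_list.h, roughly as sketched below
      (simplified from memory):
      
        enum lock_events {
                LOCKEVENT_rwsem_sleep_reader,
                LOCKEVENT_rwsem_sleep_writer,
                /* ... one entry per name in lock_events_list.h ... */
                lockevent_num,
        };
        
        DECLARE_PER_CPU(unsigned long, lockevents[lockevent_num]);
        
        /* Incrementing is a plain per-cpu add: cheap, no atomics. */
        #define lockevent_inc(ev) this_cpu_inc(lockevents[LOCKEVENT_ ## ev])
      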
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20190404174320.22416-9-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  28. 03 Apr 2019 (1 commit)