1. 21 Nov 2020, 1 commit
  2. 09 Oct 2020, 1 commit
  3. 16 Sep 2020, 1 commit
    • mm: fix exec activate_mm vs TLB shootdown and lazy tlb switching race · d53c3dfb
      Committed by Nicholas Piggin
      Reading and modifying current->mm and current->active_mm and switching
      mm should be done with irqs off, to prevent races seeing an intermediate
      state.
      
      This is similar to commit 38cf307c ("mm: fix kthread_use_mm() vs TLB
      invalidate"). At exec-time when the new mm is activated, the old one
      should usually be single-threaded and no longer used, unless something
      else is holding an mm_users reference (which may be possible).
      
      Absent other mm_users, there is also a race with preemption and lazy tlb
      switching. Consider the kernel_execve case where the current thread is
      using a lazy tlb active mm:
      
        call_usermodehelper()
          kernel_execve()
            old_mm = current->mm;
            active_mm = current->active_mm;
            *** preempt *** -------------------->  schedule()
                                                     prev->active_mm = NULL;
                                                     mmdrop(prev active_mm);
                                                   ...
                            <--------------------  schedule()
            current->mm = mm;
            current->active_mm = mm;
            if (!old_mm)
                mmdrop(active_mm);
      
      If we switch back to the kernel thread from a different mm, there is a
      double free of the old active_mm, and a missing free of the new one.
      
      Closing this race only requires interrupts to be disabled while ->mm
      and ->active_mm are being switched, but the TLB problem requires also
      holding interrupts off over activate_mm. Unfortunately not all archs
      can do that yet, e.g., arm defers the switch if irqs are disabled and
      expects finish_arch_post_lock_switch() to be called to complete the
      flush; um takes a blocking lock in activate_mm().
      
      So as a first step, disable interrupts across the mm/active_mm updates
      to close the lazy tlb preempt race, and provide an arch option to
      extend that to activate_mm which allows architectures doing IPI based
      TLB shootdowns to close the second race.
      
      This is a bit ugly, but in the interest of fixing the bug and backporting
      before all architectures are converted this is a compromise.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20200914045219.3736466-2-npiggin@gmail.com
      d53c3dfb
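      A minimal sketch of the approach described in the commit above, as it
      would sit in exec's mm-switching path (simplified and illustrative,
      not the exact patch; CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM is the
      opt-in for architectures doing IPI-based shootdowns):

        /* Keep irqs off while ->mm and ->active_mm are switched so a
         * preempting lazy-tlb schedule() cannot see, and mmdrop(), an
         * intermediate state. */
        local_irq_disable();
        active_mm = tsk->active_mm;
        tsk->active_mm = mm;
        tsk->mm = mm;
        /* Not every arch can run activate_mm() with irqs disabled yet. */
        if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
                local_irq_enable();
        activate_mm(active_mm, mm);
        if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
                local_irq_enable();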
  4. 09 Sep 2020, 1 commit
  5. 01 Sep 2020, 3 commits
  6. 06 Aug 2020, 1 commit
  7. 24 Jul 2020, 1 commit
    • entry: Provide generic syscall entry functionality · 142781e1
      Committed by Thomas Gleixner
      On syscall entry certain work needs to be done:
      
         - Establish state (lockdep, context tracking, tracing)
         - Conditional work (ptrace, seccomp, audit...)
      
      This code is needlessly duplicated and differs across
      architectures.
      
      Provide a generic version based on the x86 implementation which has all the
      RCU and instrumentation bits right.
      
      As interrupt/exception entry from user space needs parts of the same
      functionality, provide a function for this as well.
      
      syscall_enter_from_user_mode() and irqentry_enter_from_user_mode() must be
      called right after the low level ASM entry. The calling code must be
      non-instrumentable. After these functions return, the state is correct
      and the subsequent functions can be instrumented.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Link: https://lkml.kernel.org/r/20200722220519.513463269@linutronix.de
      142781e1
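      As a rough illustration of where the new helper sits (not taken from
      the patch; arch_syscall_entry() and arch_dispatch_syscall() below are
      hypothetical stand-ins for an architecture's existing glue):

        /* Hypothetical arch glue, called from the low-level ASM entry. */
        void arch_syscall_entry(struct pt_regs *regs, long nr)
        {
                /* Must run before any instrumentable code; establishes
                 * lockdep/RCU/context-tracking state and handles the
                 * ptrace/seccomp/audit work.  Returns the (possibly
                 * rewritten) syscall number, or a negative value if the
                 * syscall should be skipped. */
                nr = syscall_enter_from_user_mode(regs, nr);
                if (nr >= 0)
                        arch_dispatch_syscall(regs, nr);   /* hypothetical */
        }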
  8. 07 Jul 2020, 1 commit
  9. 05 Jul 2020, 1 commit
  10. 27 Jun 2020, 1 commit
  11. 14 Jun 2020, 1 commit
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Committed by Masahiro Yamada
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' instances has been gradually
      decreasing, but there are still more than 2400 of them.
      
      This commit finishes the conversion. While touching these lines,
      I also fixed the indentation.
      
      A variety of indentation styles were found:
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
      In order to convert all of them to 1 tab + 'help', I ran the
      following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      a7f7f624
  12. 19 May 2020, 1 commit
  13. 15 May 2020, 2 commits
    • scs: Disable when function graph tracing is enabled · ddc9863e
      Committed by Sami Tolvanen
      With SCS the return address is taken from the shadow stack and the
      value in the frame record has no effect. The mcount-based graph tracer
      hooks returns by modifying frame records on the (regular) stack, and is
      therefore not compatible; the patchable-function-entry graph tracer
      used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved
      to the shadow stack, and is compatible.

      Making the mcount-based graph tracer work with SCS would require a
      mechanism to determine the corresponding slot on the shadow stack
      (and to pass this through the ftrace infrastructure), and we expect
      that everyone will eventually move to the patchable-function-entry
      based graph tracer anyway, so for now let's disable SCS when the
      mcount-based graph tracer is enabled.
      
      SCS and patchable-function-entry are both supported from LLVM 10.x.
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      ddc9863e
    • scs: Add support for Clang's Shadow Call Stack (SCS) · d08b9f0c
      Committed by Sami Tolvanen
      This change adds generic support for Clang's Shadow Call Stack,
      which uses a shadow stack to protect return addresses from being
      overwritten by an attacker. Details are available here:
      
        https://clang.llvm.org/docs/ShadowCallStack.html
      
      Note that security guarantees in the kernel differ from the ones
      documented for user space. The kernel must store addresses of
      shadow stacks in memory, which means an attacker capable of reading
      and writing arbitrary memory may be able to locate them and hijack
      control flow by modifying the stacks.
      Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      [will: Numerous cosmetic changes]
      Signed-off-by: Will Deacon <will@kernel.org>
      d08b9f0c
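      A self-contained toy illustration of the idea in the commit above (this
      is not kernel or Clang API; it only shows why a corrupted stack frame
      no longer controls where a function returns to):

        #include <stdint.h>

        #define SCS_DEPTH 128

        /* A second, separate stack that holds nothing but return addresses. */
        static uintptr_t shadow_stack[SCS_DEPTH];
        static unsigned int scs_top;

        static inline void scs_push(uintptr_t retaddr)
        {
                shadow_stack[scs_top++] = retaddr;  /* trusted copy, saved on entry */
        }

        static inline uintptr_t scs_pop(void)
        {
                /* The return target comes from here, so overwriting the
                 * ordinary stack frame cannot redirect the return. */
                return shadow_stack[--scs_top];
        }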
  14. 13 May 2020, 1 commit
  15. 16 Mar 2020, 3 commits
  16. 06 Mar 2020, 1 commit
  17. 14 Feb 2020, 1 commit
    • context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Committed by Frederic Weisbecker
      A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call
      into context tracking on user entry/exit but instead use static keys
      (or not) to optimize those calls. Ideally every arch should migrate to
      that behaviour in the long run.
      
      Introduce a config option to let those archs remove their TIF_NOHZ
      definitions.
      Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
      490f561b
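      A hedged sketch of the pattern those archs use instead of the thread
      flag (example_user_entry_hook() is illustrative; context_tracking_enabled()
      is the existing static-key-backed test and context_tracking_enter()
      the existing entry function):

        static inline void example_user_entry_hook(void)
        {
                /* A static-key test replaces the old per-task TIF_NOHZ
                 * thread-flag test, so the call is patched out entirely
                 * when context tracking is not in use. */
                if (context_tracking_enabled())
                        context_tracking_enter(CONTEXT_USER);
        }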
  18. 04 Feb 2020, 6 commits
  19. 05 Dec 2019, 1 commit
  20. 02 Dec 2019, 1 commit
  21. 25 Nov 2019, 1 commit
  22. 23 Nov 2019, 1 commit
  23. 15 Nov 2019, 2 commits
    • y2038: allow disabling time32 system calls · 942437c9
      Committed by Arnd Bergmann
      At the moment, the compilation of the old time32 system calls depends
      purely on the architecture. As systems with a new libc based on 64-bit
      time_t are getting deployed, even architectures that previously
      supported these calls (notably x86-32 and arm32, but also many others)
      no longer depend on them. Removing them from a kernel image results in
      a smaller kernel binary, the same way we can leave out many other
      optional system calls.
      
      More importantly, on an embedded system that needs to keep working
      beyond year 2038, any user space program calling these system calls
      is likely a bug, so removing them from the kernel image provides extra
      debugging help for finding broken applications.
      
      I've gone back and forth on hiding this option unless CONFIG_EXPERT
      is set. This version leaves it visible based on the logic that
      eventually it will be turned off indefinitely.
      Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      942437c9
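      As a rough illustration of how leaving them out of the image works
      (guard and syscall names as used by the y2038 work of that era; treat
      the exact details as an assumption, body elided):

        #ifdef CONFIG_COMPAT_32BIT_TIME
        /* Built only when 32-bit time_t support is selected, so turning
         * the option off drops the time32 entry points from the kernel. */
        SYSCALL_DEFINE2(clock_gettime32, clockid_t, which_clock,
                        struct old_timespec32 __user *, tp)
        {
                /* ... */
        }
        #endif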
    • y2038: remove CONFIG_64BIT_TIME · 3ca47e95
      Committed by Arnd Bergmann
      The CONFIG_64BIT_TIME option is defined on all architectures, and can
      now be removed for simplicity.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      3ca47e95
  24. 25 Sep 2019, 2 commits
  25. 07 Sep 2019, 1 commit
  26. 04 Sep 2019, 1 commit
    • dma-mapping: remove CONFIG_ARCH_NO_COHERENT_DMA_MMAP · 62fcee9a
      Committed by Christoph Hellwig
      CONFIG_ARCH_NO_COHERENT_DMA_MMAP is now functionally identical to
      !CONFIG_MMU, so remove the separate symbol.  The only difference is that
      arm did not set it for !CONFIG_MMU, but arm uses a separate dma mapping
      implementation including its own mmap method, which is handled by moving
      the CONFIG_MMU check in dma_can_mmap so that it only applies to the
      dma-direct case, just as the other ifdefs for it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	# m68k
      62fcee9a
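      A simplified sketch of the resulting shape of dma_can_mmap() (helper
      names as of that kernel era; the dma-direct branch in the real code
      also checks coherency details omitted here):

        bool dma_can_mmap(struct device *dev)
        {
                const struct dma_map_ops *ops = get_dma_ops(dev);

                if (dma_is_direct(ops))
                        /* The MMU requirement now gates only dma-direct... */
                        return IS_ENABLED(CONFIG_MMU);
                /* ...while implementations with their own ->mmap (such as
                 * arm's nommu DMA code) answer for themselves. */
                return ops->mmap != NULL;
        }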
  27. 22 Aug 2019, 1 commit
  28. 09 Aug 2019, 1 commit