1. 20 Jul 2018, 3 commits
  2. 13 Jul 2018, 1 commit
  3. 11 Jul 2018, 1 commit
    • timekeeping: Update multiplier when NTP frequency is set directly · b061c7a5
      Miroslav Lichvar authored
      When the NTP frequency is set directly from userspace using the
      ADJ_FREQUENCY or ADJ_TICK timex mode, immediately update the
      timekeeper's multiplier instead of waiting for the next tick.
      
      This removes a hidden non-deterministic delay in setting the
      frequency and allows extremely tight control of the system clock,
      with update rates close to or even exceeding the kernel HZ.
      
      The update is limited to archs using modern timekeeping
      (!ARCH_USES_GETTIMEOFFSET).
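      
      As a hedged illustration (userspace side, not part of this patch):
      setting the frequency directly goes through adjtimex(2) with the
      ADJ_FREQUENCY mode, where timex.freq is in units of 2^-16 ppm.
      
          #include <stdio.h>
          #include <sys/timex.h>
          
          int main(void)
          {
                  struct timex tx = { 0 };
          
                  /* Requires CAP_SYS_TIME. +10 ppm, scaled by 2^16. */
                  tx.modes = ADJ_FREQUENCY;
                  tx.freq = 10 << 16;
          
                  if (adjtimex(&tx) < 0)
                          perror("adjtimex");
                  return 0;
          }
      
      With this patch, each such call takes effect immediately instead of
      waiting for the next tick.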
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Miroslav Lichvar <mlichvar@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Stephen Boyd <sboyd@kernel.org>
      Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  4. 02 Jul 2018, 3 commits
  5. 28 Jun 2018, 1 commit
  6. 27 Jun 2018, 1 commit
  7. 24 Jun 2018, 3 commits
  8. 23 Jun 2018, 1 commit
    • rseq: Avoid infinite recursion when delivering SIGSEGV · 784e0300
      Will Deacon authored
      When delivering a signal to a task that is using rseq, we call into
      __rseq_handle_notify_resume() so that the registers pushed in the
      sigframe are updated to reflect the state of the restartable sequence
      (for example, ensuring that the signal returns to the abort handler if
      necessary).
      
      However, if the rseq management fails due to an unrecoverable fault when
      accessing userspace or certain combinations of RSEQ_CS_* flags, then we
      will attempt to deliver a SIGSEGV. This has the potential for infinite
      recursion if the rseq code continuously fails on signal delivery.
      
      Avoid this problem by using force_sigsegv() instead of force_sig(), which
      is explicitly designed to reset the SEGV handler to SIG_DFL in the case
      of a recursive fault. In doing so, remove rseq_signal_deliver() from the
      internal rseq API and have an optional struct ksignal * parameter to
      rseq_handle_notify_resume() instead.
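      
      A sketch of the pattern (the helper name here is hypothetical; only
      the force_sigsegv() call is taken from this patch):
      
          /* On an unrecoverable rseq fault during signal delivery: */
          static void rseq_deliver_segv(struct ksignal *ksig,
                                        struct task_struct *t)
          {
                  /*
                   * Unlike force_sig(SIGSEGV, t), force_sigsegv() first
                   * resets the task's SEGV handler to SIG_DFL, so a
                   * second fault kills the task instead of recursing.
                   */
                  force_sigsegv(ksig ? ksig->sig : SIGSEGV, t);
          }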
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: peterz@infradead.org
      Cc: paulmck@linux.vnet.ibm.com
      Cc: boqun.feng@gmail.com
      Link: https://lkml.kernel.org/r/1529664307-983-1-git-send-email-will.deacon@arm.com
  9. 22 Jun 2018, 7 commits
  10. 20 Jun 2018, 1 commit
  11. 19 Jun 2018, 5 commits
  12. 16 Jun 2018, 5 commits
  13. 15 Jun 2018, 7 commits
    • sched/core / kcov: avoid kcov_area during task switch · 0ed557aa
      Mark Rutland authored
      During a context switch, we first switch_mm() to the next task's mm,
      then switch_to() that new task.  This means that vmalloc'd regions which
      had previously been faulted in can transiently disappear in the context
      of the prev task.
      
      Functions instrumented by KCOV may try to access a vmalloc'd kcov_area
      during this window, and as the fault handling code is instrumented, this
      results in a recursive fault.
      
      We must avoid accessing any kcov_area during this window.  We can do so
      with a new flag in kcov_mode, set prior to switching the mm, and cleared
      once the new task is live.  Since task_struct::kcov_mode isn't always a
      specific enum kcov_mode value, this is made an unsigned int.
      
      The manipulation is hidden behind kcov_{prepare,finish}_switch() helpers,
      which are empty for !CONFIG_KCOV kernels.
      
      The code uses macros because I can't use static inline functions without a
      circular include dependency between <linux/sched.h> and <linux/kcov.h>,
      since the definition of task_struct uses things defined in <linux/kcov.h>.
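      
      A hedged sketch of the helpers (the flag name KCOV_IN_CTXSW is an
      assumption here, reserving one bit of kcov_mode):
      
          /* <linux/kcov.h> */
          #ifdef CONFIG_KCOV
          #define kcov_prepare_switch(t)                  \
          do {                                            \
                  (t)->kcov_mode |= KCOV_IN_CTXSW;        \
          } while (0)
          
          #define kcov_finish_switch(t)                   \
          do {                                            \
                  (t)->kcov_mode &= ~KCOV_IN_CTXSW;       \
          } while (0)
          #else
          #define kcov_prepare_switch(t) do { } while (0)
          #define kcov_finish_switch(t) do { } while (0)
          #endif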
      
      Link: http://lkml.kernel.org/r/20180504135535.53744-4-mark.rutland@arm.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kcov: prefault the kcov_area · dc55daff
      Mark Rutland authored
      On many architectures the vmalloc area is lazily faulted in upon first
      access.  This is problematic for KCOV, as __sanitizer_cov_trace_pc
      accesses the (vmalloc'd) kcov_area, and fault handling code may be
      instrumented.  If an access to kcov_area faults, this will result in
      mutual recursion through the fault handling code and
      __sanitizer_cov_trace_pc(), eventually leading to stack corruption
      and/or overflow.
      
      We can avoid this by faulting in the kcov_area before
      __sanitizer_cov_trace_pc() is permitted to access it.  Once it has been
      faulted in, it will remain present in the process page tables, and will
      not fault again.
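      
      A minimal sketch of that prefault pass (the helper name appears in
      the notes below): touch one word per page so every page of the area
      is populated before instrumented code may access it.
      
          static void kcov_fault_in_area(struct kcov *kcov)
          {
                  unsigned long stride = PAGE_SIZE / sizeof(unsigned long);
                  unsigned long *area = kcov->area;
                  unsigned long offset;
          
                  /* One read per page faults the whole area in. */
                  for (offset = 0; offset < kcov->size; offset += stride)
                          READ_ONCE(area[offset]);
          }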
      
      [akpm@linux-foundation.org: code cleanup]
      [akpm@linux-foundation.org: add comment explaining kcov_fault_in_area()]
      [akpm@linux-foundation.org: fancier code comment from Mark]
      Link: http://lkml.kernel.org/r/20180504135535.53744-3-mark.rutland@arm.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kcov: ensure irq code sees a valid area · c9484b98
      Mark Rutland authored
      Patch series "kcov: fix unexpected faults".
      
      These patches fix a few issues where KCOV code could trigger recursive
      faults, discovered while debugging a patch enabling KCOV for arch/arm:
      
      * On CONFIG_PREEMPT kernels, there's a small race window where
        __sanitizer_cov_trace_pc() can see a bogus kcov_area.
      
      * Lazy faulting of the vmalloc area can cause mutual recursion between
        fault handling code and __sanitizer_cov_trace_pc().
      
      * During the context switch, switching the mm can cause the kcov_area to
        be transiently unmapped.
      
      These are prerequisites for enabling KCOV on arm, but the issues
      themselves are generic -- we just happen to avoid them by chance
      rather than by design on x86-64 and arm64.
      
      This patch (of 3):
      
      For kernels built with CONFIG_PREEMPT, some C code may execute before or
      after the interrupt handler, while the hardirq count is zero.  In these
      cases, in_task() can return true.
      
      A task can be interrupted in the middle of a KCOV_DISABLE ioctl while it
      resets the task's kcov data via kcov_task_init().  Instrumented code
      executed during this period will call __sanitizer_cov_trace_pc(), and as
      in_task() returns true, will inspect t->kcov_mode before trying to write
      to t->kcov_area.
      
      In kcov_task_init() we update t->kcov_{mode,area,size} with plain stores,
      which may be re-ordered, torn, etc.  Thus __sanitizer_cov_trace_pc() may
      see bogus values for any of these fields, and may attempt to write to
      memory which is not mapped.
      
      Let's avoid this by using WRITE_ONCE() to set t->kcov_mode, with a
      barrier() to ensure this is ordered before we clear t->kcov_{area,size}.
      This ensures that any code executed while kcov_task_init() is preempted
      will either see valid values for t->kcov_{area,size}, or will see that
      t->kcov_mode is KCOV_MODE_DISABLED, and bail out without touching
      t->kcov_area.
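      
      A sketch of the resulting ordering, assuming kcov_task_init() has
      roughly this shape:
      
          void kcov_task_init(struct task_struct *t)
          {
                  /* Publish "disabled" before tearing anything down... */
                  WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
                  /* ...and keep the compiler from reordering the stores. */
                  barrier();
                  t->kcov_size = 0;
                  t->kcov_area = NULL;
                  t->kcov = NULL;
          }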
      
      Link: http://lkml.kernel.org/r/20180504135535.53744-2-mark.rutland@arm.com
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel/relay.c: change return type to vm_fault_t · 3fb3894b
      Souptick Joarder authored
      Use new return type vm_fault_t for fault handler.  For now, this is just
      documenting that the function returns a VM_FAULT value rather than an
      errno.  Once all instances are converted, vm_fault_t will become a
      distinct type.
      
      See commit 1c8f4220 ("mm: change return type to vm_fault_t").
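      
      For illustration, the conversion amounts to a signature change on
      relay's fault handler (a sketch, not the full diff):
      
          /*
           * Before: static int relay_buf_fault(struct vm_fault *vmf);
           * the int silently carried a VM_FAULT value.  After:
           */
          static vm_fault_t relay_buf_fault(struct vm_fault *vmf);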
      
      Link: http://lkml.kernel.org/r/20180510140335.GA25363@jordon-HP-15-Notebook-PC
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: check for SIGKILL inside dup_mmap() loop · 655c79bb
      Tetsuo Handa authored
      As a theoretical problem, dup_mmap() of an mm_struct with 60000+ vmas
      can loop while potentially allocating memory, with mm->mmap_sem held
      for write by the current thread.  This is bad if the current thread
      was selected as an OOM victim, for it will continue allocating from
      memory reserves while the OOM reaper is unable to reclaim memory.
      
      As an actually observable problem, it is not difficult to make the
      OOM reaper unable to reclaim memory if the OOM victim is blocked at
      i_mmap_lock_write() in this loop.  Unfortunately, since nobody can
      explain whether it is safe to use a killable wait there, let's check
      for SIGKILL before trying to allocate memory.  Even without an OOM
      event, there is no point in continuing the loop if the current
      thread has been killed.
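      
      A hedged sketch of the check at the top of the per-vma loop in
      dup_mmap():
      
          for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
                  /*
                   * Bail out early rather than keep allocating from
                   * memory reserves after the thread was SIGKILLed.
                   */
                  if (fatal_signal_pending(current)) {
                          retval = -EINTR;
                          goto out;
                  }
                  /* ... existing vma copy and allocations ... */
          }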
      
      I tested with a debug printk().  This patch should be safe because we
      already fail if security_vm_enough_memory_mm() or
      kmem_cache_alloc(GFP_KERNEL) fails, and exit_mmap() handles the
      partially set up mm.
      
         ***** Aborting dup_mmap() due to SIGKILL *****
         ***** Aborting dup_mmap() due to SIGKILL *****
         ***** Aborting dup_mmap() due to SIGKILL *****
         ***** Aborting dup_mmap() due to SIGKILL *****
         ***** Aborting exit_mmap() due to NULL mmap *****
      
      [akpm@linux-foundation.org: add comment]
      Link: http://lkml.kernel.org/r/201804071938.CDE04681.SOFVQJFtMHOOLF@I-love.SAKURA.ne.jp
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: yield to scheduler when loading kimage segments · a8311f64
      Jarrett Farnitano authored
      Without yielding while loading kimage segments, a large initrd will
      block all other work on the CPU performing the load until it is
      completed.  For example loading an initrd of 200MB on a low power single
      core system will lock up the system for a few seconds.
      
      To increase system responsiveness to other tasks at that time, call
      cond_resched() in both the crash kernel and normal kernel segment
      loading loops.
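      
      A sketch of the change (shown for the normal-segment loop; the
      crash-segment loop gets the same call):
      
          while (ubytes) {
                  /* ... copy the next chunk of the segment ... */
          
                  /* Let other runnable tasks in between chunks. */
                  cond_resched();
          }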
      
      I did run into a practical problem.  Hardware watchdogs on embedded
      systems can have short timers on the order of seconds.  If the system
      is locked up for a few seconds with only a single core available, the
      watchdog may not be petted in a timely fashion.  If this happens, the
      hardware watchdog will fire and reset the system.
      
      This really only becomes a problem when you are working with a single
      core, a decently sized initrd, and have a constrained hardware watchdog.
      
      Link: http://lkml.kernel.org/r/1528738546-3328-1-git-send-email-jmf@amazon.com
      Signed-off-by: Jarrett Farnitano <jmf@amazon.com>
      Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kconfig: tinyconfig: remove stale stack protector fixups · a0f8c297
      Masahiro Yamada authored
      Prior to commit 2a61f474 ("stack-protector: test compiler capability
      in Kconfig and drop AUTO mode"), the stack protector was configured by
      the choice of NONE, REGULAR, STRONG, AUTO.
      
      tiny.config needed to set NONE explicitly because the default value of
      the choice, AUTO, did not produce the tiniest kernel.
      
      Now that there are only two boolean symbols, STACKPROTECTOR and
      STACKPROTECTOR_STRONG, they are naturally disabled by "make
      allnoconfig", which "make tinyconfig" is based on.  Remove unnecessary
      lines from the tiny.config fragment file.
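      
      For reference, with only the two bool symbols left, "make allnoconfig"
      already yields the following, so tiny.config has nothing to add:
      
          # CONFIG_STACKPROTECTOR is not set
          # CONFIG_STACKPROTECTOR_STRONG is not set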
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 14 Jun 2018, 1 commit
    • dma-mapping: move all DMA mapping code to kernel/dma · cf65a0f6
      Christoph Hellwig authored
      Currently the code is split over various files with dma- prefixes in
      the lib/ and drivers/base directories, and the number of files keeps
      growing.  Move them into a single directory to keep the code together
      and remove the file name prefixes.  To match the irq infrastructure,
      this directory is placed under the kernel/ directory.
      Signed-off-by: Christoph Hellwig <hch@lst.de>