1. 22 April 2016, 1 commit
    • cpu/hotplug: Fix rollback during error-out in __cpu_disable() · 3b9d6da6
      Committed by Sebastian Andrzej Siewior
      The recent introduction of the hotplug thread, which invokes the callbacks on
      the plugged CPU, caused the following regression:
      
      If takedown_cpu() fails, then we run into several issues:
      
       1) The rollback of the target cpu states is not invoked. That leaves the smp
          threads and the hotplug thread in disabled state.
      
       2) notify_online() is executed due to a missing skip_onerr flag. That causes
          both CPU_DOWN_FAILED and CPU_ONLINE notifications to be invoked, which
          confuses quite a few notifiers.
      
       3) The CPU_DOWN_FAILED notification is not invoked on the target CPU. That's
          not an issue per se, but it is inconsistent and in consequence blocks the
          patches which rely on these states being invoked on the target CPU and not
          on the controlling CPU. It also does not preserve the strict call order on
          rollback, which is problematic for the ongoing state machine conversion as
          well.
      
      To fix this we add a rollback flag to the remote callback machinery and invoke
      the rollback, including the CPU_DOWN_FAILED notification, on the remote
      CPU. Further, mark the notify online state with 'skip_onerr' so we don't get a
      double invocation.
      
      This workaround will go away once we have moved the unplug invocation to the
      target CPU itself.
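
      A minimal sketch of the error-path idea, with hypothetical names (st,
      prev_state, cpuhp_kick_ap_work) modeled on kernel/cpu.c of that era; the
      exact mainline code differs:

        /* Sketch only: field and helper names are assumptions. */
        static void rollback_failed_takedown(struct cpuhp_cpu_state *st,
                                             unsigned int cpu,
                                             enum cpuhp_state prev_state)
        {
                st->rollback = true;     /* flag for the remote callback machinery */
                st->target = prev_state; /* walk the states back, incl. CPU_DOWN_FAILED */
                cpuhp_kick_ap_work(cpu); /* run the rollback on the target CPU itself */
        }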
      
      [ tglx: Massaged changelog and moved the CPU_DOWN_FAILED notification to the
        target CPU ]
      
      Fixes: 4cb28ced ("cpu/hotplug: Create hotplug threads")
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: linux-s390@vger.kernel.org
      Cc: rt@linutronix.de
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Link: http://lkml.kernel.org/r/20160408124015.GA21960@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3b9d6da6
  2. 21 April 2016, 2 commits
  3. 20 April 2016, 1 commit
  4. 19 April 2016, 1 commit
    • locking/pvqspinlock: Fix division by zero in qstat_read() · 66876595
      Committed by Davidlohr Bueso
      While playing with the qstat statistics (in <debugfs>/qlockstat/) I ran into
      the following splat on a VM when opening pv_hash_hops:
      
        divide error: 0000 [#1] SMP
        ...
        RIP: 0010:[<ffffffff810b61fe>]  [<ffffffff810b61fe>] qstat_read+0x12e/0x1e0
        ...
        Call Trace:
          [<ffffffff811cad7c>] ? mem_cgroup_commit_charge+0x6c/0xd0
          [<ffffffff8119750c>] ? page_add_new_anon_rmap+0x8c/0xd0
          [<ffffffff8118d3b9>] ? handle_mm_fault+0x1439/0x1b40
          [<ffffffff811937a9>] ? do_mmap+0x449/0x550
          [<ffffffff811d3de3>] ? __vfs_read+0x23/0xd0
          [<ffffffff811d4ab2>] ? rw_verify_area+0x52/0xd0
          [<ffffffff811d4bb1>] ? vfs_read+0x81/0x120
          [<ffffffff811d5f12>] ? SyS_read+0x42/0xa0
          [<ffffffff815720f6>] ? entry_SYSCALL_64_fastpath+0x1e/0xa8
      
      Fix this by verifying that qstat_pv_kick_unlock is in fact non-zero,
      similarly to what the qstat_pv_latency_wake case does. If nothing
      else, this can come from resetting the statistics, so having 0 kicks
      should be quite valid in this context.
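
      A self-contained sketch of the guard in user-space C (the function name
      is a hypothetical stand-in for the average computation in qstat_read()):

        #include <stdio.h>
        #include <stdint.h>

        /* Skip the division entirely when the kick count is zero, which is
         * a legal state right after the statistics have been reset. */
        static void print_avg_hash_hops(uint64_t total_hops, uint64_t kicks)
        {
                uint64_t whole = 0, frac = 0;

                if (kicks) {            /* the missing non-zero check */
                        whole = total_hops / kicks;
                        frac = (total_hops % kicks) * 10 / kicks;
                }
                printf("%llu.%llu\n", (unsigned long long)whole,
                       (unsigned long long)frac);
        }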
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Reviewed-by: Waiman Long <Waiman.Long@hpe.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave@stgolabs.net
      Cc: waiman.long@hpe.com
      Link: http://lkml.kernel.org/r/1460961103-24953-1-git-send-email-dave@stgolabs.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      66876595
  5. 15 April 2016, 1 commit
  6. 14 April 2016, 1 commit
  7. 09 April 2016, 1 commit
    • cpufreq: Call cpufreq_disable_fast_switch() in sugov_exit() · 6c9d9c81
      Committed by Rafael J. Wysocki
      Due to differences in the cpufreq core's handling of runtime CPU
      offline and nonboot CPUs disabling during system suspend-to-RAM,
      fast frequency switching gets disabled after a suspend-to-RAM and
      resume cycle on all of the nonboot CPUs.
      
      To prevent that from happening, move the invocation of
      cpufreq_disable_fast_switch() from cpufreq_exit_governor() to
      sugov_exit(), as the schedutil governor is the only user of fast
      frequency switching today anyway.
      
      That simply prevents cpufreq_disable_fast_switch() from being called
      without invoking the ->governor callback for the CPUFREQ_GOV_POLICY_EXIT
      event (which happens during system suspend now).
      
      Fixes: b7898fda (cpufreq: Support for fast frequency switching)
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      6c9d9c81
  8. 05 April 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement page
      cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion on whether a
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  9. 04 April 2016, 1 commit
  10. 02 April 2016, 2 commits
    • cpufreq: schedutil: New governor based on scheduler utilization data · 9bdcb44e
      Committed by Rafael J. Wysocki
      Add a new cpufreq scaling governor, called "schedutil", that uses
      scheduler-provided CPU utilization information as input for making
      its decisions.
      
      Doing that is possible after commit 34e2c555 (cpufreq: Add
      mechanism for registering utilization update callbacks) that
      introduced cpufreq_update_util() called by the scheduler on
      utilization changes (from CFS) and RT/DL task status updates.
      In particular, CPU frequency scaling decisions may be based on
      the utilization data passed to cpufreq_update_util() by CFS.
      
      The new governor is relatively simple.
      
      The frequency selection formula used by it depends on whether or not
      the utilization is frequency-invariant.  In the frequency-invariant
      case the new CPU frequency is given by
      
      	next_freq = 1.25 * max_freq * util / max
      
      where util and max are the last two arguments of cpufreq_update_util().
      In turn, if util is not frequency-invariant, the maximum frequency in
      the above formula is replaced with the current frequency of the CPU:
      
      	next_freq = 1.25 * curr_freq * util / max
      
      The coefficient 1.25 corresponds to the frequency tipping point at
      (util / max) = 0.8.
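
      In integer arithmetic both cases collapse into one helper; a minimal
      sketch, assuming ref_freq stands for max_freq or curr_freq as described
      above (ref_freq + ref_freq/4 encodes the 1.25 factor):

        static unsigned long sugov_next_freq(unsigned long ref_freq,
                                             unsigned long util,
                                             unsigned long max)
        {
                /* next_freq = 1.25 * ref_freq * util / max, no floating point */
                return (ref_freq + (ref_freq >> 2)) * util / max;
        }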
      
      All of the computations are carried out in the utilization update
      handlers provided by the new governor.  One of those handlers is
      used for cpufreq policies shared between multiple CPUs and the other
      one is for policies with one CPU only (and therefore it doesn't need
      to use any extra synchronization means).
      
      The governor supports fast frequency switching if that is supported
      by the cpufreq driver in use and possible for the given policy.
      In the fast switching case, all operations of the governor take
      place in its utilization update handlers.  If fast switching cannot
      be used, the frequency switch operations are carried out with the
      help of a work item which only calls __cpufreq_driver_target()
      (under a mutex) to trigger a frequency update (to a value already
      computed beforehand in one of the utilization update handlers).
      
      Currently, the governor treats all of the RT and DL tasks as
      "unknown utilization" and sets the frequency to the allowed
      maximum when updated from the RT or DL sched classes.  That
      heavy-handed approach should be replaced with something more
      subtle and specifically targeted at RT and DL tasks.
      
      The governor shares some tunables management code with the
      "ondemand" and "conservative" governors and uses some common
      definitions from cpufreq_governor.h, but apart from that it
      is stand-alone.
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      9bdcb44e
    • cpufreq: sched: Helpers to add and remove update_util hooks · 0bed612b
      Committed by Rafael J. Wysocki
      Replace the single helper for adding and removing cpufreq utilization
      update hooks, cpufreq_set_update_util_data(), with a pair of helpers,
      cpufreq_add_update_util_hook() and cpufreq_remove_update_util_hook(),
      and modify the users of cpufreq_set_update_util_data() accordingly.
      
      With the new helpers, the code using them doesn't need to worry
      about the internals of struct update_util_data and in particular
      it doesn't need to worry about populating the func field in it
      properly upfront.
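
      A hedged sketch of the resulting pair of prototypes (close to, but not
      guaranteed to match, the mainline declarations of that time):

        void cpufreq_add_update_util_hook(int cpu, struct update_util_data *data,
                        void (*func)(struct update_util_data *data, u64 time,
                                     unsigned long util, unsigned long max));
        void cpufreq_remove_update_util_hook(int cpu);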
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      0bed612b
  11. 31 March 2016, 3 commits
    • locking/lockdep: Print chain_key collision information · 39e2e173
      Committed by Alfredo Alvarez Fernandez
      A sequence of pairs [class_idx -> corresponding chain_key iteration]
      is printed for both the current held_lock chain and the cached chain.
      
      That exposes the two different class_idx sequences that led to that
      particular hash value.
      
      This helps with debugging hash chain collision reports.
      Signed-off-by: Alfredo Alvarez Fernandez <alfredoalvarezfernandez@gmail.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: sedat.dilek@gmail.com
      Cc: tytso@mit.edu
      Link: http://lkml.kernel.org/r/1459357416-19190-1-git-send-email-alfredoalvarezernandez@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39e2e173
    • perf/core: Don't leak event in the syscall error path · 201c2f85
      Committed by Alexander Shishkin
      In the error path, event_file not being NULL is used to determine
      whether the event itself still needs to be freed, so fix it up to
      avoid leaking.
      Reported-by: Leon Yu <chianglungyu@gmail.com>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 13005627 ("perf: Do not double free")
      Link: http://lkml.kernel.org/r/87twk06yxp.fsf@ashishki-desk.ger.corp.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      201c2f85
    • perf/core: Fix time tracking bug with multiplexing · 8fdc6539
      Committed by Peter Zijlstra
      Stephane reported that commit:
      
        3cbaa590 ("perf: Fix ctx time tracking by introducing EVENT_TIME")
      
      introduced a regression wrt. time tracking, as easily observed by:
      
      > This patch introduce a bug in the time tracking of events when
      > multiplexing is used.
      >
      > The issue is easily reproducible with the following perf run:
      >
      >  $ perf stat -a -C 0 -e branches,branches,branches,branches,branches,branches -I 1000
      >      1.000730239            652,394      branches   (66.41%)
      >      1.000730239            597,809      branches   (66.41%)
      >      1.000730239            593,870      branches   (66.63%)
      >      1.000730239            651,440      branches   (67.03%)
      >      1.000730239            656,725      branches   (66.96%)
      >      1.000730239      <not counted>      branches
      >
      > One branches event is shown as not having run. Yet, with
      > multiplexing, all events should run especially with a 1s (-I 1000)
      > interval. The delta for time_running comes out to 0. Yet, the event
      > has run because the kernel is actually multiplexing the events. The
      > problem is that the time tracking in the kernel, and especially in
      > ctx_sched_out(), is wrong now.
      >
      > The problem is that in case the kernel enters ctx_sched_out() with the
      > following state:
      >    ctx->is_active=0x7 event_type=0x1
      >    Call Trace:
      >     [<ffffffff813ddd41>] dump_stack+0x63/0x82
      >     [<ffffffff81182bdc>] ctx_sched_out+0x2bc/0x2d0
      >     [<ffffffff81183896>] perf_mux_hrtimer_handler+0xf6/0x2c0
      >     [<ffffffff811837a0>] ? __perf_install_in_context+0x130/0x130
      >     [<ffffffff810f5818>] __hrtimer_run_queues+0xf8/0x2f0
      >     [<ffffffff810f6097>] hrtimer_interrupt+0xb7/0x1d0
      >     [<ffffffff810509a8>] local_apic_timer_interrupt+0x38/0x60
      >     [<ffffffff8175ca9d>] smp_apic_timer_interrupt+0x3d/0x50
      >     [<ffffffff8175ac7c>] apic_timer_interrupt+0x8c/0xa0
      >
      > In that case, the test:
      >       if (is_active & EVENT_TIME)
      >
      > will be false and the time will not be updated. Time must always be updated on
      > sched out.
      
      Fix this by always updating time if EVENT_TIME was set, as opposed to
      only updating time when EVENT_TIME changed.
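
      A sketch of the corrected check in ctx_sched_out(), assuming the
      surrounding variable names; the point is to test the EVENT_TIME bit
      while is_active still holds the currently set bits, not the mask of
      changed bits:

        if (is_active & EVENT_TIME) {
                /* time must always be updated on sched out */
                update_context_time(ctx);
                update_cgrp_time_from_cpuctx(cpuctx);
        }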
      Reported-by: Stephane Eranian <eranian@google.com>
      Tested-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: kan.liang@intel.com
      Cc: namhyung@kernel.org
      Fixes: 3cbaa590 ("perf: Fix ctx time tracking by introducing EVENT_TIME")
      Link: http://lkml.kernel.org/r/20160329072644.GB3408@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8fdc6539
  12. 29 March 2016, 2 commits
  13. 26 March 2016, 3 commits
    • arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections · be7635e7
      Committed by Alexander Potapenko
      KASAN needs to know whether the allocation happens in an IRQ handler.
      This lets us strip everything below the IRQ entry point to reduce the
      number of unique stack traces needed to be stored.
      
      Move the definition of __irq_entry to <linux/interrupt.h> so that the
      users don't need to pull in <linux/ftrace.h>.  Also introduce the
      __softirq_entry macro which is similar to __irq_entry, but puts the
      corresponding functions to the .softirqentry.text section.
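
      Usage then looks like the following sketch (handler name hypothetical):

        #include <linux/interrupt.h>

        /* Placed into the .softirqentry.text section, so KASAN can strip
         * everything below the softirq entry point from stack traces. */
        static __softirq_entry void my_softirq_action(struct softirq_action *h)
        {
                /* handler body */
        }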
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be7635e7
    • oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space · 36324a99
      Committed by Michal Hocko
      When oom_reaper manages to unmap all the eligible vmas, there shouldn't
      be much of the freeable memory held by the oom victim left anymore, so it
      makes sense to clear the TIF_MEMDIE flag for the victim and allow the
      OOM killer to select another task.
      
      The lack of TIF_MEMDIE also means that the victim cannot access memory
      reserves anymore, but that shouldn't be a problem because it would get
      the access again if it needs to allocate and hits the OOM killer again
      due to the fatal_signal_pending resp. PF_EXITING check.  We can safely
      hide the task from the OOM killer because it is clearly not a good
      candidate anymore, as everything reclaimable has been torn down already.
      
      This patch makes it possible to cap the time an OOM victim can keep
      TIF_MEMDIE, and thus hold off further global OOM killer actions, granted
      the oom reaper is able to take mmap_sem for the associated mm struct.
      This is not guaranteed now, but further steps should make sure that
      mmap_sem for write is blocked killable, which will help to reduce such
      lock contention.  That is not done by this patch.
      
      Note that exit_oom_victim might be called on a remote task from
      __oom_reap_task now, so we have to check and clear the flag atomically;
      otherwise we might race and underflow oom_victims or wake up waiters too
      early.
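
      A hedged sketch of that atomic check-and-clear, modeled on mm/oom_kill.c
      (the wake-up detail may differ from the actual patch):

        void exit_oom_victim(struct task_struct *tsk)
        {
                /* may now run on a remote task, so test-and-clear atomically */
                if (!test_and_clear_tsk_thread_flag(tsk, TIF_MEMDIE))
                        return;

                if (!atomic_dec_return(&oom_victims))
                        wake_up_all(&oom_victims_wait);
        }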
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
      Suggested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Andrea Argangeli <andrea@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      36324a99
    • sched: add schedule_timeout_idle() · 69b27baf
      Committed by Andrew Morton
      This will be needed in the patch "mm, oom: introduce oom reaper".
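
      The helper is a thin wrapper; a sketch mirroring its
      schedule_timeout_interruptible() siblings:

        signed long __sched schedule_timeout_idle(signed long timeout)
        {
                /* TASK_IDLE: uninterruptible sleep that doesn't inflate loadavg */
                __set_current_state(TASK_IDLE);
                return schedule_timeout(timeout);
        }
        EXPORT_SYMBOL(schedule_timeout_idle);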
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      69b27baf
  14. 25 March 2016, 1 commit
  15. 23 March 2016, 16 commits
    • PM / sleep: Clear pm_suspend_global_flags upon hibernate · 27614273
      Committed by Lukas Wunner
      When suspending to RAM, waking up and later suspending to disk,
      we gratuitously runtime resume devices after the thaw phase.
      This does not occur if we always suspend to RAM or always to disk.
      
      pm_complete_with_resume_check(), which gets called from
      pci_pm_complete() among others, schedules a runtime resume
      if PM_SUSPEND_FLAG_FW_RESUME is set. The flag is set during
      a suspend-to-RAM cycle. It is cleared at the beginning of
      the suspend-to-RAM cycle but not afterwards and it is not
      cleared during a suspend-to-disk cycle at all. Fix it.
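
      A minimal sketch of the fix, assuming a hypothetical hook in the
      hibernation prepare path (pm_suspend_clear_flags() zeroes
      pm_suspend_global_flags):

        static int prepare_hibernation_flags(void)
        {
                /* clear leftover suspend-to-RAM flags such as
                 * PM_SUSPEND_FLAG_FW_RESUME, so that
                 * pm_complete_with_resume_check() won't schedule a
                 * gratuitous runtime resume after the thaw phase */
                pm_suspend_clear_flags();
                return 0;
        }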
      
      Fixes: ef25ba04 (PM / sleep: Add flags to indicate platform firmware involvement)
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Cc: 4.4+ <stable@vger.kernel.org> # 4.4+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      27614273
    • kernel/...: convert pr_warning to pr_warn · a395d6a7
      Committed by Joe Perches
      Use the more common logging method with the eventual goal of removing
      pr_warning altogether.
      
      Miscellanea:
      
       - Realign arguments
       - Coalesce formats
       - Add missing space between a few coalesced formats
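
      An illustrative before/after of the mechanical conversion (the call
      site is hypothetical):

        pr_warning("CPU%d: failed to come online\n", cpu);  /* before */
        pr_warn("CPU%d: failed to come online\n", cpu);     /* after */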
      Signed-off-by: Joe Perches <joe@perches.com>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>	[kernel/power/suspend.c]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a395d6a7
    • memremap: add MEMREMAP_WC flag · c907e0eb
      Committed by Brian Starkey
      Add a flag to memremap() for writecombine mappings.  Mappings satisfied
      by this flag will not be cached; however, writes may be delayed or
      combined into more efficient bursts.  This is most suitable for buffers
      written sequentially by the CPU for use by other DMA devices.
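
      A hedged usage sketch ('res' is an assumed struct resource; error
      handling reduced to the essentials):

        void *buf;

        /* Writecombined mapping: uncached, but CPU writes may be buffered
         * and merged into bursts on the way to memory. */
        buf = memremap(res->start, resource_size(res), MEMREMAP_WC);
        if (!buf)
                return -ENOMEM;

        /* ... fill the buffer sequentially for the DMA consumer ... */

        memunmap(buf);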
      Signed-off-by: Brian Starkey <brian.starkey@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c907e0eb
    • memremap: don't modify flags · cf61e2a1
      Committed by Brian Starkey
      These patches implement a MEMREMAP_WC flag for memremap(), which can be
      used to obtain writecombine mappings.  This is then used for setting up
      dma_coherent_mem regions which use the DMA_MEMORY_MAP flag.
      
      The motivation is to fix an alignment fault on arm64, and the suggestion
      to implement MEMREMAP_WC for this case was made at [1].  That particular
      issue is handled in patch 4, which makes sure that the appropriate
      memset function is used when zeroing allocations mapped as IO memory.
      
      This patch (of 4):
      
      Don't modify the flags input argument to memremap(). MEMREMAP_WB is
      already a special case so we can check for it directly instead of
      clearing flag bits in each mapper.
      Signed-off-by: Brian Starkey <brian.starkey@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf61e2a1
    • kernel/signal.c: add compile-time check for __ARCH_SI_PREAMBLE_SIZE · 41b27154
      Committed by Helge Deller
      The value of __ARCH_SI_PREAMBLE_SIZE defines the size (including
      padding) of the part of the struct siginfo that is before the union, and
      it is then used to calculate the needed padding (SI_PAD_SIZE) to make
      the size of struct siginfo equal to 128 (SI_MAX_SIZE) bytes.
      
      Depending on the target architecture and word width, it equals either
      3 or 4 times sizeof(int).
      
      Since the very beginning we had __ARCH_SI_PREAMBLE_SIZE wrong on the
      parisc architecture for the 64-bit kernel build.  It's even more
      frustrating because it can easily be checked at compile time whether
      the value was defined correctly.
      
      This patch adds such a check for the correctness of
      __ARCH_SI_PREAMBLE_SIZE in the hope that it will prevent existing and
      future architectures from running into the same problem.
      
      I refrained from replacing __ARCH_SI_PREAMBLE_SIZE by offsetof() in
      copy_siginfo() in include/asm-generic/siginfo.h, because a) it doesn't
      make any difference and b) it's used in the Documentation/kmemcheck.txt
      example.
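
      A sketch of the check itself; the placement in signals_init() and the
      _sifields._pad spelling follow the generic siginfo layout but are not
      guaranteed verbatim:

        void __init signals_init(void)
        {
                /* If this trips, __ARCH_SI_PREAMBLE_SIZE is defined wrongly. */
                BUILD_BUG_ON(__ARCH_SI_PREAMBLE_SIZE !=
                             offsetof(struct siginfo, _sifields._pad));
        }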
      
      I ran this patch through the 0-DAY kernel test infrastructure and only
      the parisc architecture triggered as expected.  That means that this
      patch should be OK for all major architectures.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      41b27154
    • kernel: add kcov code coverage · 5c9a8750
      Committed by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall inputs.
      To achieve this goal it does not collect coverage in soft/hard
      interrupts, and instrumentation of some inherently non-deterministic or
      non-interesting parts of the kernel is disabled (e.g.  scheduler, locking).
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The complementary
      compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  A
      typical coverage can be just a dozen basic blocks (e.g.  an invalid
      input).  In such a context gcov becomes prohibitively expensive, as the
      reset/collect coverage steps depend on the total number of basic
      blocks/edges in the program (in the case of the kernel it is about 2M).
      The cost of kcov depends only on the number of executed basic
      blocks/edges.  On top of that, the kernel requires per-thread coverage
      because there are always background threads and unrelated processes
      that also produce coverage.  With inlined gcov instrumentation,
      per-thread coverage is not possible.
      
      kcov exposes kernel PCs and control flow to user-space which is
      insecure.  But debugfs should not be mapped as user accessible.
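
      A hedged user-space usage sketch, condensed from the example
      documentation that accompanied the patch (ioctl numbers and the
      debugfs path as defined there):

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
        #define KCOV_ENABLE     _IO('c', 100)
        #define KCOV_DISABLE    _IO('c', 101)
        #define COVER_SIZE      (64 << 10)      /* in unsigned longs */

        int main(void)
        {
                unsigned long *cover, n, i;
                int fd = open("/sys/kernel/debug/kcov", O_RDWR);

                if (fd == -1)
                        exit(1);
                /* size the per-task buffer, then map it and enable collection */
                if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
                        exit(1);
                cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (cover == MAP_FAILED || ioctl(fd, KCOV_ENABLE, 0))
                        exit(1);
                cover[0] = 0;           /* word 0 holds the number of PCs */
                read(-1, NULL, 0);      /* the syscall under test */
                n = cover[0];
                for (i = 0; i < n; i++)
                        printf("0x%lx\n", cover[i + 1]);
                ioctl(fd, KCOV_DISABLE, 0);
                close(fd);
                return 0;
        }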
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c9a8750
    • profile: hide unused functions when !CONFIG_PROC_FS · ade356b9
      Committed by Arnd Bergmann
      A couple of functions and variables in the profile implementation are
      used only on SMP systems by the procfs code, but are unused if either
      procfs is disabled or in uniprocessor kernels.  gcc prints a harmless
      warning about the unused symbols:
      
        kernel/profile.c:243:13: error: 'profile_flip_buffers' defined but not used [-Werror=unused-function]
         static void profile_flip_buffers(void)
                     ^
        kernel/profile.c:266:13: error: 'profile_discard_flip_buffers' defined but not used [-Werror=unused-function]
         static void profile_discard_flip_buffers(void)
                     ^
        kernel/profile.c:330:12: error: 'profile_cpu_callback' defined but not used [-Werror=unused-function]
         static int profile_cpu_callback(struct notifier_block *info,
                    ^
      
      This adds further #ifdefs to the file, to annotate exactly in which cases
      they are used.  I have done several thousand ARM randconfig kernels with
      this patch applied and no longer get any warnings in this file.
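
      The annotation pattern is the usual one; a sketch with the function
      body elided:

        #if defined(CONFIG_SMP) && defined(CONFIG_PROC_FS)
        /* Only the SMP procfs code calls this, so only build it then. */
        static void profile_flip_buffers(void)
        {
                /* body unchanged */
        }
        #endif /* CONFIG_SMP && CONFIG_PROC_FS */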
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Robin Holt <robinmholt@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ade356b9
    • panic: change nmi_panic from macro to function · ebc41f20
      Committed by Hidehiro Kawai
      Commit 1717f209 ("panic, x86: Fix re-entrance problem due to panic
      on NMI") and commit 58c5661f ("panic, x86: Allow CPUs to save
      registers even if looping in NMI context") introduced nmi_panic() which
      prevents concurrent/recursive execution of panic().  It also saves
      registers for the crash dump on x86.
      
      However, there are some cases where NMI handlers still use panic().
      This patch set partially replaces them with nmi_panic() in those cases.
      
      Even after this patch set is applied, some NMI or similar handlers
      (e.g.  the MCE handler) continue to use panic().  This is because I
      can't test them well, and actual problems are unlikely to happen.  For
      example, the possibility that a normal panic and a panic on MCE happen
      simultaneously is very low.
      
      This patch (of 3):
      
      Convert nmi_panic() to a proper function and export it instead of
      exporting internal implementation details to modules, for obvious
      reasons.
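
      A hedged sketch of the resulting function, close to the mainline
      version of that time but not guaranteed verbatim:

        /*
         * panic_cpu holds the first CPU that entered panic(), or
         * PANIC_CPU_INVALID if nobody has. Only that first CPU panics;
         * other CPUs re-entering from NMI context park instead of recursing.
         */
        void nmi_panic(struct pt_regs *regs, const char *msg)
        {
                int old_cpu, cpu;

                cpu = raw_smp_processor_id();
                old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, cpu);

                if (old_cpu == PANIC_CPU_INVALID)
                        panic("%s", msg);
                else if (old_cpu != cpu)
                        nmi_panic_self_stop(regs);
        }
        EXPORT_SYMBOL(nmi_panic);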
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Cc: Javi Merino <javi.merino@arm.com>
      Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
      Cc: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebc41f20
    • fs/coredump: prevent fsuid=0 dumps into user-controlled directories · 378c6520
      Committed by Jann Horn
      This commit fixes the following security hole affecting systems where
      all of the following conditions are fulfilled:
      
       - The fs.suid_dumpable sysctl is set to 2.
       - The kernel.core_pattern sysctl's value starts with "/". (Systems
         where kernel.core_pattern starts with "|/" are not affected.)
       - Unprivileged user namespace creation is permitted. (This is
         true on Linux >=3.8, but some distributions disallow it by
         default using a distro patch.)
      
      Under these conditions, if a program executes under secure exec rules,
      causing it to run with the SUID_DUMP_ROOT flag, then unshares its user
      namespace, changes its root directory and crashes, the coredump will be
      written using fsuid=0 and a path derived from kernel.core_pattern - but
      this path is interpreted relative to the root directory of the process,
      allowing the attacker to control where a coredump will be written with
      root privileges.
      
      To fix the security issue, always interpret core_pattern for dumps that
      are written under SUID_DUMP_ROOT relative to the root directory of init.
      Signed-off-by: Jann Horn <jann@thejh.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      378c6520
    • ptrace: change __ptrace_unlink() to clear ->ptrace under ->siglock · 1333ab03
      Committed by Oleg Nesterov
      This test case (a simplified version of one generated by syzkaller)
      
      	#include <unistd.h>
      	#include <sys/ptrace.h>
      	#include <sys/wait.h>
      
      	void test(void)
      	{
      		for (;;) {
      			if (fork()) {
      				wait(NULL);
      				continue;
      			}
      
      			ptrace(PTRACE_SEIZE, getppid(), 0, 0);
      			ptrace(PTRACE_INTERRUPT, getppid(), 0, 0);
      			_exit(0);
      		}
      	}
      
      	int main(void)
      	{
      		int np;
      
      		for (np = 0; np < 8; ++np)
      			if (!fork())
      				test();
      
      		while (wait(NULL) > 0)
      			;
      		return 0;
      	}
      
      triggers the 2nd WARN_ON_ONCE(!signr) warning in do_jobctl_trap().  The
      problem is that __ptrace_unlink() clears task->jobctl under siglock but
      task->ptrace is cleared without this lock held; this fools the "else"
      branch which assumes that !PT_SEIZED means PT_PTRACED.
      
      Note also that most of other PTRACE_SEIZE checks can race with detach
      from the exiting tracer too.  Say, the callers of ptrace_trap_notify()
      assume that SEIZED can't go away after it was checked.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1333ab03
    • auditsc: for seccomp events, log syscall compat state using in_compat_syscall · efbc0fbf
      Committed by Andy Lutomirski
      Except on SPARC, this is what the code always did.  SPARC compat seccomp
      was buggy, although the impact of the bug was limited because SPARC
      32-bit and 64-bit syscall numbers are the same.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      efbc0fbf
    • ptrace: in PEEK_SIGINFO, check syscall bitness, not task bitness · 5c465217
      Committed by Andy Lutomirski
      Users of the 32-bit ptrace() ABI expect the full 32-bit ABI.  siginfo
      translation should check ptrace() ABI, not caller task ABI.
      
      This is an ABI change on SPARC.  Let's hope that no one relied on the
      old buggy ABI.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c465217
    • seccomp: check in_compat_syscall, not is_compat_task, in strict mode · 5c38065e
      Committed by Andy Lutomirski
      Seccomp wants to know the syscall bitness, not the caller task bitness,
      when it selects the syscall whitelist.
      
      As far as I know, this makes no difference on any architecture, so it's
      not a security problem.  (It generates identical code everywhere except
      sparc, and, on sparc, the syscall numbering is the same for both ABIs.)
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c38065e
    • kernel/hung_task.c: use timeout diff when timeout is updated · b4aa14a6
      Committed by Tetsuo Handa
      When a new timeout is written to /proc/sys/kernel/hung_task_timeout_secs,
      khungtaskd is interrupted and again sleeps for the full timeout duration.
      
      This means that hung tasks will not be checked if a new timeout is
      written periodically within the old timeout duration, and/or the check
      for hung tasks will be delayed by up to the previous timeout duration.
      Fix this by remembering the last time khungtaskd checked for hung tasks.
      
      This change will allow other watchdog tasks (if any) to share khungtaskd
      by sleeping for the minimal timeout diff of all watchdog tasks.  Doing
      more watchdog work from khungtaskd will reduce the possibility of
      printk() collisions by multiple watchdog threads.
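
      A sketch of the remaining-timeout computation, close to the helper the
      patch adds (unsigned arithmetic handles jiffies wraparound):

        static long hung_timeout_jiffies(unsigned long last_checked,
                                         unsigned long timeout)
        {
                /* timeout of 0 means "disabled", i.e. sleep forever */
                return timeout ? last_checked - jiffies + timeout * HZ
                               : MAX_SCHEDULE_TIMEOUT;
        }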
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Aaron Tomlin <atomlin@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4aa14a6
    • tracing: Record and show NMI state · 7e6867bf
      Committed by Peter Zijlstra
      The latency tracer format has a nice column to indicate IRQ state, but
      this is not able to tell us about NMI state.
      
      When tracing perf interrupt handlers (which often run in NMI context)
      it is very useful to see how the events nest.
      
      Link: http://lkml.kernel.org/r/20160318153022.105068893@infradead.org
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7e6867bf
    • tracing: Fix trace_printk() to print when not using bprintk() · 3debb0a9
      Committed by Steven Rostedt (Red Hat)
      The trace_printk() code will allocate extra buffers if the build detects
      that a trace_printk() is used. To do this, the format of the trace_printk()
      is saved to the __trace_printk_fmt section, and if that section is bigger
      than zero, the buffers are allocated (along with a message that this has
      happened).
      
      If trace_printk() uses a format that is not a constant, and thus something
      not guaranteed to be around when the print happens, the compiler optimizes
      the fmt out, as it is not used, and the __trace_printk_fmt section is not
      filled. This means the kernel will not allocate the special buffers needed
      for the trace_printk() and the trace_printk() will not write anything to the
      tracing buffer.
      
      Adding a "__used" to the variable in the __trace_printk_fmt section will
      keep it around, even though it is set to NULL. This will keep the string
      from being printed in the debugfs/tracing/printk_formats section as it is
      not needed.
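
      A hedged sketch of the annotated section entry inside the trace_printk()
      machinery (macro context simplified):

        /* __used keeps the entry in the __trace_printk_fmt section even when
         * the format is not constant and the pointer is therefore NULL. */
        static const char *trace_printk_fmt __used
                __attribute__((section("__trace_printk_fmt"))) =
                __builtin_constant_p(fmt) ? fmt : NULL;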
      Reported-by: Vlastimil Babka <vbabka@suse.cz>
      Fixes: 07d777fe "tracing: Add percpu buffers for trace_printk()"
      Cc: stable@vger.kernel.org # v3.5+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3debb0a9
  16. 21 March 2016, 3 commits