1. 14 Sep 2017 (15 commits)
    • watchdog/core: Further simplify sysctl handling · e8b62b2d
      Committed by Thomas Gleixner
      Use a single function to apply sysctl changes. This is not a
      high-frequency user space interface, and it's root only.
      
      Preparatory patch to cleanup the sysctl variable handling.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.549114957@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e8b62b2d
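      In code, the funneling looks roughly like this. A minimal sketch, not
      the exact patch: the two helpers under proc_watchdog_update() are
      illustrative placeholders for the reconfiguration work.

        #include <linux/mutex.h>
        #include <linux/sysctl.h>

        static DEFINE_MUTEX(watchdog_mutex);

        /* Illustrative helpers standing in for the real reconfiguration. */
        static void softlockup_reconfigure_threads(void) { /* re-park/unpark */ }
        static void watchdog_nmi_reconfigure(void)       { /* re-create events */ }

        /* One place that re-evaluates all watchdog knobs after any change. */
        static void proc_watchdog_update(void)
        {
        	softlockup_reconfigure_threads();
        	watchdog_nmi_reconfigure();
        }

        /* Root-only, low-frequency interface: just redo the full update. */
        static int proc_watchdog_common(struct ctl_table *table, int write,
        				void __user *buffer, size_t *lenp, loff_t *ppos)
        {
        	int err;

        	mutex_lock(&watchdog_mutex);
        	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
        	if (!err && write)
        		proc_watchdog_update();
        	mutex_unlock(&watchdog_mutex);
        	return err;
        }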
    • watchdog/core: Get rid of the thread teardown/setup dance · d57108d4
      Committed by Thomas Gleixner
      The lockup detector reconfiguration tears down all watchdog threads when
      the watchdog is disabled and sets them up again when it is enabled.
      
      That's a pointless exercise. The watchdog threads are not consuming an
      insane amount of resources, so it's enough to set them up at init time and
      keep them in parked position when the watchdog is disabled and unpark them
      when it is re-enabled. The smpboot thread infrastructure takes care of
      keeping the force-parked threads in place even across CPU hotplug.
      
      Aside from that, the code implements the park/unpark facility of smp
      hotplug threads on its own, which is even more pointless. We have
      functionality in the smpboot thread code to do so.
      
      Use the new thread management functions and get rid of the unholy mess.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.470370113@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d57108d4
    • watchdog/core: Create new thread handling infrastructure · 2eb2527f
      Committed by Thomas Gleixner
      The lockup detector reconfiguration tears down all watchdog threads when
      the watchdog is disabled and sets them up again when it is enabled.
      
      That's a pointless exercise. The watchdog threads are not consuming an
      insane amount of resources, so it's enough to set them up at init time and
      keep them in parked position when the watchdog is disabled and unpark them
      when it is re-enabled. The smpboot thread infrastructure takes care of
      keeping the force-parked threads in place even across CPU hotplug.
      
      Another horrible mechanism is the open coded park/unpark loops which are
      used for reconfiguration of the watchdog. The smpboot infrastructure
      allows exactly the same via smpboot_update_cpumask_percpu_thread(), which
      is CPU hotplug safe. Using that instead of the open coded loops makes it
      possible to get rid of the hotplug locking mess in the watchdog code.
      
      Implement a clean infrastructure that allows replacing the open coded
      nonsense.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.377182587@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2eb2527f
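      A hedged sketch of the scheme: register the per-CPU threads once, then
      reconfigure by retargeting the allowed cpumask, which parks and unparks
      threads under the hood. The hook names follow the existing watchdog
      code; treat the wrapper as illustrative.

        #include <linux/percpu.h>
        #include <linux/smpboot.h>

        static DEFINE_PER_CPU(struct task_struct *, softlockup_watchdog);

        static struct smp_hotplug_thread watchdog_threads = {
        	.store			= &softlockup_watchdog,
        	.thread_should_run	= watchdog_should_run,
        	.thread_fn		= watchdog,
        	.thread_comm		= "watchdog/%u",
        	.setup			= watchdog_enable,
        	.park			= watchdog_disable,
        	.unpark			= watchdog_enable,
        };

        /*
         * Reconfigure by retargeting the mask: an empty mask parks every
         * thread, restoring the allowed mask unparks them. Hotplug safe.
         */
        static void softlockup_update_threads(const struct cpumask *mask)
        {
        	smpboot_update_cpumask_percpu_thread(&watchdog_threads, mask);
        }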
    • smpboot/threads, watchdog/core: Avoid runtime allocation · 0d85923c
      Committed by Thomas Gleixner
      smpboot_update_cpumask_percpu_thread() allocates a temporary cpumask at
      runtime. This is suboptimal: the proper error handling needed at the
      call site costs more code size than a statically allocated temporary
      mask costs data size.
      
      Add a static temporary cpumask. The function is globally serialized, so
      no further protection is required.
      
      Remove the half-baked error handling in the watchdog code and get rid of
      the export, as there are no in-tree modular users of that function.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.297288838@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0d85923c
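      Roughly, the change has this shape. A sketch only; the real function
      body may differ in detail, but the point is that caller-side
      serialization makes one static scratch mask safe.

        #include <linux/cpumask.h>

        /* Globally serialized by the callers, so one static scratch mask
         * replaces the per-call allocation and its error handling. */
        static struct cpumask tmp_mask;

        void smpboot_update_cpumask_percpu_thread(struct smp_hotplug_thread *plug_thread,
        					  const struct cpumask *new)
        {
        	unsigned int cpu;

        	get_online_cpus();
        	mutex_lock(&smpboot_threads_lock);

        	/* Park threads that fell out of the mask ... */
        	cpumask_andnot(&tmp_mask, plug_thread->cpumask, new);
        	for_each_cpu_and(cpu, &tmp_mask, cpu_online_mask)
        		smpboot_park_thread(plug_thread, cpu);

        	/* ... and unpark the ones that were newly added. */
        	cpumask_andnot(&tmp_mask, new, plug_thread->cpumask);
        	for_each_cpu_and(cpu, &tmp_mask, cpu_online_mask)
        		smpboot_unpark_thread(plug_thread, cpu);

        	cpumask_copy(plug_thread->cpumask, new);

        	mutex_unlock(&smpboot_threads_lock);
        	put_online_cpus();
        }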
    • watchdog/core: Split out cpumask write function · 05ba3de7
      Committed by Thomas Gleixner
      Split the write part of the cpumask proc handler out into a separate helper
      to avoid deep indentation. This also reduces the patch complexity in the
      following cleanups.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.218075991@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      05ba3de7
    • watchdog/core: Clean up the #ifdef maze · 368a7e2c
      Committed by Thomas Gleixner
      The #ifdef maze in this file is horrible. Group things at least a bit so
      one can figure out what belongs to what.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.139629546@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      368a7e2c
    • watchdog/core: Clean up stub functions · 2b9d7f23
      Committed by Thomas Gleixner
      Having stub functions which take a full page is not helping the
      readability of the code.
      
      Condense them and move the doubled #ifdef variant into the SYSFS section.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194147.045545271@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2b9d7f23
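      For illustration only, the condensed style looks like this. The stub
      names follow the watchdog headers, but this is a sketch of the style
      change, not the patch itself.

        /* Condensed stubs: one line each instead of a four-line body apiece. */
        static inline int  watchdog_nmi_enable(unsigned int cpu) { return 0; }
        static inline void watchdog_nmi_disable(unsigned int cpu) { }
        static inline void hardlockup_detector_disable(void) { }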
    • watchdog/core: Remove the park_in_progress obfuscation · 01f0a027
      Committed by Thomas Gleixner
      Commit:
      
        b94f5118 ("kernel/watchdog: prevent false hardlockup on overloaded system")
      
      tries to fix the following issue:
      
      proc_write()
         set_sample_period()    <--- New sample period becomes visible
      			  <----- Broken starts
         proc_watchdog_update()
           watchdog_enable_all_cpus()		watchdog_hrtimer_fn()
           update_watchdog_all_cpus()		   restart_timer(sample_period)
              watchdog_park_threads()
      
      					thread->park()
      					  disable_nmi()
      			  <----- Broken ends
      
      The reason why this is broken is that the update of the watchdog threshold
      becomes immediately effective and visible for the hrtimer function which
      uses that value to rearm the timer. But the NMI/perf side still uses the
      old value up to the point where it is disabled. If the rate has been
      lowered then the NMI can run fast enough to 'detect' a hard lockup because
      the timer has not fired due to the longer period.
      
      The patch 'fixed' this by adding a variable:
      
      proc_write()
         set_sample_period()
      					<----- Broken starts
         proc_watchdog_update()
           watchdog_enable_all_cpus()		watchdog_hrtimer_fn()
           update_watchdog_all_cpus()		   restart_timer(sample_period)
               watchdog_park_threads()
      	  park_in_progress = 1
      					<----- Broken ends
      				        nmi_watchdog()
      					  if (park_in_progress)
      					     return;
      
      The only effect of this variable was to make the window where the breakage
      can hit small enough that it was no longer observable in testing. From a
      correctness point of view it is a pointless bandaid which merely papers
      over the root cause: the unsynchronized update of the variable.
      
      Looking deeper into the related code paths unearthed similar problems in
      the watchdog_start()/stop() functions.
      
       watchdog_start()
      	perf_nmi_event_start()
      	hrtimer_start()
      
       watchdog_stop()
      	hrtimer_cancel()
      	perf_nmi_event_stop()
      
      In both cases the call order is wrong because if the task gets preempted
      or the VM gets scheduled out long enough after the first call, then there is
      a chance that the next NMI will see a stale hrtimer interrupt count and
      trigger a false positive hard lockup splat.
      
      Get rid of park_in_progress so the code can be gradually deobfuscated and
      pruned from several layers of duct tape papering over the root cause,
      which has been either ignored or not understood at all.
      
      Once this is removed the underlying problem will be fixed by rewriting the
      proc interface to do a proper synchronized update.
      
      Address the start/stop() ordering problem as well by reversing the call
      order, so this part is at least correct now.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1709052038270.2393@nanos
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      01f0a027
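      A sketch of the corrected ordering: the hrtimer must be live before the
      NMI side can observe its interrupt count, and the NMI side must be dead
      before the hrtimer stops. Variable names follow the watchdog code, but
      treat this as a sketch rather than the exact patch.

        #include <linux/hrtimer.h>
        #include <linux/percpu-defs.h>

        static DEFINE_PER_CPU(struct hrtimer, watchdog_hrtimer);
        static u64 sample_period;

        static void watchdog_enable(unsigned int cpu)
        {
        	struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);

        	/* Start the timer first, so the NMI side can never sample a
        	 * stale hrtimer interrupt count. */
        	hrtimer_start(hrtimer, ns_to_ktime(sample_period),
        		      HRTIMER_MODE_REL_PINNED);
        	watchdog_nmi_enable(cpu);
        }

        static void watchdog_disable(unsigned int cpu)
        {
        	struct hrtimer *hrtimer = this_cpu_ptr(&watchdog_hrtimer);

        	/* Mirror image: stop the NMI side first, then the timer. */
        	watchdog_nmi_disable(cpu);
        	hrtimer_cancel(hrtimer);
        }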
    • watchdog/hardlockup/perf: Prevent CPU hotplug deadlock · 941154bd
      Committed by Thomas Gleixner
      The following deadlock is possible in the watchdog hotplug code:
      
        cpus_write_lock()
          ...
            takedown_cpu()
              smpboot_park_threads()
                smpboot_park_thread()
                  kthread_park()
                    ->park() := watchdog_disable()
                      watchdog_nmi_disable()
                        perf_event_release_kernel();
                          put_event()
                            _free_event()
                              ->destroy() := hw_perf_event_destroy()
                                x86_release_hardware()
                                  release_ds_buffers()
                                    get_online_cpus()
      
      when a per cpu watchdog perf event is destroyed which drops the last
      reference to the PMU hardware. The cleanup code there invokes
      get_online_cpus() which instantly deadlocks because the hotplug percpu
      rwsem is write locked.
      
      To solve this add a deferring mechanism:
      
        cpus_write_lock()
      			   kthread_park()
      			    watchdog_nmi_disable(deferred)
      			      perf_event_disable(event);
      			      move_event_to_deferred(event);
      			   ....
        cpus_write_unlock()
        cleanup_deferred_events()
          perf_event_release_kernel()
      
      This is still properly serialized against concurrent hotplug via the
      cpu_add_remove_lock, which is held by the task which initiated the hotplug
      event.
      
      This is also used to handle event destruction when the watchdog threads are
      parked via other mechanisms than CPU hotplug.
      Analyzed-by: Peter Zijlstra <peterz@infradead.org>
      Reported-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.884469246@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      941154bd
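      A hedged sketch of the deferral, close in spirit to the patch: under
      cpus_write_lock() the event is only disabled and queued; the release
      that takes get_online_cpus() happens later, outside the hotplug lock.

        #include <linux/cpumask.h>
        #include <linux/perf_event.h>

        static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
        static DEFINE_PER_CPU(struct perf_event *, dead_event);
        static struct cpumask dead_events_mask;

        void hardlockup_detector_perf_disable(void)
        {
        	struct perf_event *event = this_cpu_read(watchdog_ev);

        	if (event) {
        		perf_event_disable(event);
        		/* Defer the release; we may hold the hotplug lock. */
        		this_cpu_write(dead_event, event);
        		this_cpu_write(watchdog_ev, NULL);
        		cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
        	}
        }

        /* Called after cpus_write_unlock(); serialized against concurrent
         * hotplug by cpu_add_remove_lock, held by the hotplug initiator. */
        void hardlockup_detector_perf_cleanup(void)
        {
        	int cpu;

        	for_each_cpu(cpu, &dead_events_mask) {
        		struct perf_event *event = per_cpu(dead_event, cpu);

        		if (event)
        			perf_event_release_kernel(event);
        		per_cpu(dead_event, cpu) = NULL;
        	}
        	cpumask_clear(&dead_events_mask);
        }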
    • watchdog/hardlockup/perf: Remove broken self disable on failure · 20d853fd
      Committed by Thomas Gleixner
      The self-disabling feature is broken vs. CPU hotplug locking:
      
      CPU 0			   CPU 1
      cpus_write_lock();
       cpu_up(1)
         wait_for_completion()
      			   ....
      			   unpark_watchdog()
      			   ->unpark()
      			     perf_event_create() <- fails
      			       watchdog_enable &= ~NMI_WATCHDOG;
      			   ....
      cpus_write_unlock();
      			   CPU 2
      cpus_write_lock()
       cpu_down(2)
         wait_for_completion()
      			   wakeup(watchdog);
      			     watchdog()
      			     if (!(watchdog_enable & NMI_WATCHDOG))
      				watchdog_nmi_disable()
      				  perf_event_disable()
      				  ....
      				  cpus_read_lock();
      
      			   stop_smpboot_threads()
      			     park_watchdog();
      			       wait_for_completion(watchdog->parked);
      
      Result: End of hotplug and instantaneous full lockup of the machine.
      
      There is a similar problem with disabling the watchdog via the user space
      interface as the sysctl function fiddles with watchdog_enable directly.
      
      It's very debatable whether this is required at all. Suppose the watchdog
      works nicely on N CPUs and fails to enable on CPU N + 1, either during
      hotplug or because the user space interface disabled it via the sysctl
      cpumask and some perf user then grabbed the counter, which is then
      unavailable for the watchdog when the cpumask is changed back.
      
      There is no real justification for disabling the watchdog everywhere in
      that case.
      
      One of the reasons WHY this is done is the utter stupidity of the init code
      of the perf NMI watchdog. Instead of checking upfront at boot whether PERF
      is available and functional at all, it just does this check at run time
      over and over when user space fiddles with the sysctl. That's broken beyond
      repair along with the idiotic error code dependent warn level printks and
      the even more silly printk rate limiting.
      
      If the init code checks whether perf works at boot time, then this mess can
      be more or less avoided completely. Perf does not come magically into life
      at runtime. Brain usage while coding is overrated.
      
      Remove the cruft and add a temporary safeguard which gets removed later.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.806708429@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      20d853fd
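      What checking at boot could look like, as a sketch: the probe-once
      helper and the availability flag are assumptions here, not this patch
      (later patches in the series add something very similar).

        #include <linux/perf_event.h>

        static struct perf_event_attr wd_hw_attr;	/* set up elsewhere */
        static bool hardlockup_perf_available __read_mostly;

        static int __init hardlockup_detector_perf_init(void)
        {
        	/* Probe once: if perf cannot back the watchdog at boot, it
        	 * never will, so stop re-checking on every sysctl write. */
        	struct perf_event *ev;

        	ev = perf_event_create_kernel_counter(&wd_hw_attr,
        					      smp_processor_id(), NULL,
        					      watchdog_overflow_callback,
        					      NULL);
        	if (IS_ERR(ev))
        		return PTR_ERR(ev);

        	perf_event_release_kernel(ev);
        	hardlockup_perf_available = true;
        	return 0;
        }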
    • watchdog/core: Mark hardlockup_detector_disable() __init · 7a355820
      Committed by Thomas Gleixner
      The function is only used by the KVM init code. Mark it __init to prevent
      creative abuse.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.727134632@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7a355820
    • watchdog/core: Rename watchdog_proc_mutex · 946d1977
      Committed by Thomas Gleixner
      Following patches will use the mutex for other purposes as well. Rename
      it, as it is no longer a proc-specific thing.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.647714850@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      946d1977
    • watchdog/core: Rework CPU hotplug locking · b7a34981
      Committed by Thomas Gleixner
      The watchdog proc interface causes extensive recursive locking of the CPU
      hotplug percpu rwsem, which is deadlock prone.
      
      Replace the get/put_online_cpus() pairs with cpu_hotplug_disable()/enable()
      calls for now. Later patches will remove that requirement completely.
      Reported-by: Borislav Petkov <bp@alien8.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.568079057@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b7a34981
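      Schematically, the interim swap looks like this. cpu_hotplug_disable()
      and cpu_hotplug_enable() are the real APIs; the wrapper function is
      hypothetical, just to show the pattern.

        #include <linux/cpu.h>

        /* Sketch: block hotplug without read-locking the percpu rwsem,
         * which the proc path could otherwise acquire recursively. */
        static int proc_watchdog_locked(int (*update)(void))
        {
        	int err;

        	cpu_hotplug_disable();	/* instead of get_online_cpus() */
        	err = update();
        	cpu_hotplug_enable();	/* instead of put_online_cpus() */
        	return err;
        }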
    • watchdog/core: Remove broken suspend/resume interfaces · 5490125d
      Committed by Thomas Gleixner
      This interface has several issues:
      
       - It's causing recursive locking of the hotplug lock.
      
       - It's complete overkill to tear down all threads and then recreate them.
      
      The same can be achieved with the simple hardlockup_detector_perf_stop()/
      restart() interfaces. The abuse from the busy-looping poweroff() loop of
      PARISC has been solved as well.
      
      Remove the cruft.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Link: http://lkml.kernel.org/r/20170912194146.487537732@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5490125d
    • watchdog/core: Provide interface to stop from poweroff() · 6554fd8c
      Committed by Thomas Gleixner
      PARISC has a busy-looping power-off routine. If the watchdog is enabled,
      the watchdog timer will still fire, but the watchdog thread is not
      running, which causes the softlockup watchdog to trigger.
      
      Provide an interface that allows turning the watchdog off.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Don Zickus <dzickus@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: linux-parisc@vger.kernel.org
      Link: http://lkml.kernel.org/r/20170912194146.327343752@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6554fd8c
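      The interface boils down to a one-way switch; roughly (sketch, with the
      state word borrowed from the watchdog code):

        unsigned long watchdog_enabled;	/* existing watchdog state word */

        /*
         * Stop the lockup detectors for a busy-looping power-off path.
         * No locking required: this is a one-way street at shutdown time.
         */
        void lockup_detector_soft_poweroff(void)
        {
        	watchdog_enabled = 0;
        }

      PARISC's machine_power_off() can then call this right before entering
      its busy loop.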
  2. 18 Aug 2017 (1 commit)
    • kernel/watchdog: Prevent false positives with turbo modes · 7edaeb68
      Committed by Thomas Gleixner
      The hardlockup detector on x86 uses a performance counter based on unhalted
      CPU cycles and a periodic hrtimer. The hrtimer period is about 2/5 of the
      performance counter period, so the hrtimer should fire 2-3 times before the
      performance counter NMI fires. The NMI code checks whether the hrtimer
      fired since the last invocation. If not, it assumes a hard lockup.
      
      The calculation of those periods is based on the nominal CPU
      frequency. Turbo modes increase the CPU clock frequency and therefore
      shorten the period of the perf/NMI watchdog. With extreme Turbo-modes (3x
      nominal frequency) the perf/NMI period is shorter than the hrtimer period
      which leads to false positives.
      
      A simple fix would be to shorten the hrtimer period, but that comes with
      the side effect of more frequent hrtimer and softlockup thread wakeups,
      which is not desired.
      
      Implement a low pass filter, which checks the perf/NMI period against
      kernel time. If the perf/NMI fires before 4/5 of the watchdog period has
      elapsed then the event is ignored and postponed to the next perf/NMI.
      
      That solves the problem and avoids the overhead of shorter hrtimer periods
      and more frequent softlockup thread wakeups.
      
      Fixes: 58687acb ("lockup_detector: Combine nmi_watchdog and softlockup detector")
      Reported-and-tested-by: Kan Liang <Kan.liang@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: dzickus@redhat.com
      Cc: prarit@redhat.com
      Cc: ak@linux.intel.com
      Cc: babu.moger@oracle.com
      Cc: peterz@infradead.org
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: stable@vger.kernel.org
      Cc: atomlin@redhat.com
      Cc: akpm@linux-foundation.org
      Cc: torvalds@linux-foundation.org
      Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1708150931310.1886@nanos
      7edaeb68
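      A sketch of the filter, close to the merged code (the rearm bookkeeping
      is omitted): compare the time since the last accepted NMI against 4/5
      of the watchdog period using kernel time, and swallow early NMIs.

        #include <linux/ktime.h>
        #include <linux/percpu-defs.h>

        static DEFINE_PER_CPU(ktime_t, last_timestamp);
        static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;

        void watchdog_update_hrtimer_threshold(u64 period)
        {
        	/* The hrtimer period is 2/5 of the watchdog threshold, so
        	 * 2 * period is the 4/5 point: the earliest a sane NMI fires. */
        	watchdog_hrtimer_sample_threshold = period * 2;
        }

        static bool watchdog_check_timestamp(void)
        {
        	ktime_t delta, now = ktime_get_mono_fast_ns();

        	delta = now - __this_cpu_read(last_timestamp);
        	if (delta < watchdog_hrtimer_sample_threshold)
        		return false;	/* perf ticked too fast (turbo): ignore */
        	__this_cpu_write(last_timestamp, now);
        	return true;
        }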
  3. 15 Jul 2017 (1 commit)
  4. 13 Jul 2017 (2 commits)
  5. 02 Mar 2017 (3 commits)
  6. 25 Jan 2017 (1 commit)
  7. 15 Dec 2016 (3 commits)
  8. 11 Jul 2016 (1 commit)
    • Revert "perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86" · 44530d58
      Committed by Ingo Molnar
      This reverts commit 2c95afc1.
      
      Stephane reported the following regression:
      
       > Since Andi added:
       >
       > commit 2c95afc1
       > Author: Andi Kleen <ak@linux.intel.com>
       > Date:   Thu Jun 9 06:14:38 2016 -0700
       >
       >    perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86
       >
       > $ perf stat -e ref-cycles ls
       >   <not counted> ....
       >
       > fails systematically because the ref-cycles is now used by the
       > watchdog and given this is a system-wide pinned event, it monopolizes
       > the fixed counter 2 which is the only counter able to measure this event.
      
      Since the next merge window is near, fix the regression for now
      by reverting the commit.
      Reported-by: Stephane Eranian <eranian@google.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      44530d58
  9. 14 Jun 2016 (1 commit)
    • perf/x86/intel, watchdog: Switch NMI watchdog to ref cycles on x86 · 2c95afc1
      Committed by Andi Kleen
      The NMI watchdog uses either the fixed cycles or a generic cycles
      counter. This causes a lot of conflicts with users of the PMU who want
      to run a full group including the cycles fixed counter, for example
      the --topdown support recently added to perf stat. The code needs to
      fall back to not using groups, which can cause measurement inaccuracy
      due to multiplexing errors.
      
      This patch switches the NMI watchdog to use reference cycles
      on Intel systems.  This is actually more accurate than cycles,
      because cycles can tick faster than the measured CPU Frequency
      due to Turbo mode.
      
      The ref cycles always tick at their frequency, or slower when
      the system is idling. That means the NMI watchdog can never
      expire too early, unlike with cycles.
      
      The reference cycles tick roughly at the frequency of the TSC,
      so the same period computation can be used.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: jolsa@kernel.org
      Link: http://lkml.kernel.org/r/1465478079-19993-1-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2c95afc1
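      The switch itself is small: point the watchdog's perf attribute at the
      ref-cycles event on Intel. A sketch of the idea; the init hook name is
      illustrative, not the patch's actual plumbing.

        #include <linux/perf_event.h>

        static struct perf_event_attr wd_hw_attr = {
        	.type		= PERF_TYPE_HARDWARE,
        	.config		= PERF_COUNT_HW_CPU_CYCLES,
        	.size		= sizeof(struct perf_event_attr),
        	.pinned		= 1,
        	.disabled	= 1,
        };

        static void __init watchdog_select_event(void)
        {
        	/* Ref cycles tick at roughly TSC frequency regardless of
        	 * turbo, so the existing period computation keeps working. */
        	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
        		wd_hw_attr.config = PERF_COUNT_HW_REF_CPU_CYCLES;
        }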
  10. 18 Mar 2016 (1 commit)
  11. 19 Dec 2015 (2 commits)
    • panic, x86: Allow CPUs to save registers even if looping in NMI context · 58c5661f
      Committed by Hidehiro Kawai
      Currently, kdump_nmi_shootdown_cpus(), a subroutine of crash_kexec(),
      sends an NMI IPI to CPUs which haven't called panic() to stop them,
      save their register information and do some cleanups for crash dumping.
      However, if such a CPU is infinitely looping in NMI context, we fail to
      save its register information into the crash dump.
      
      For example, this can happen when unknown NMIs are broadcast to all
      CPUs as follows:
      
        CPU 0                             CPU 1
        ===========================       ==========================
        receive an unknown NMI
        unknown_nmi_error()
          panic()                         receive an unknown NMI
            spin_trylock(&panic_lock)     unknown_nmi_error()
            crash_kexec()                   panic()
                                              spin_trylock(&panic_lock)
                                              panic_smp_self_stop()
                                                infinite loop
              kdump_nmi_shootdown_cpus()
                issue NMI IPI -----------> blocked until IRET
                                                infinite loop...
      
      Here, since CPU 1 is in NMI context, the second NMI from CPU 0 is
      blocked until CPU 1 executes IRET. However, CPU 1 never executes IRET,
      so the NMI is not handled and the callback function to save registers is
      never called.
      
      In practice, this can happen on some servers which broadcast NMIs to all
      CPUs when the NMI button is pushed.
      
      To save registers in this case, we need to:
      
        a) Return from NMI handler instead of looping infinitely
        or
        b) Call the callback function directly from the infinite loop
      
      Inherently, a) is risky because NMI is also used to prevent corrupted
      data from being propagated to devices.  So, we chose b).
      
      This patch does the following:
      
      1. Move the infinite looping of CPUs which haven't called panic() in NMI
         context (actually done by panic_smp_self_stop()) outside of panic() to
         enable us to refer to pt_regs. Please note that panic_smp_self_stop() is
         still used for normal context.
      
      2. Call a callback of kdump_nmi_shootdown_cpus() directly to save
         registers and do some cleanups after setting waiting_for_crash_ipi, which
         is used for counting down the number of CPUs that handled the callback.
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Aaron Tomlin <atomlin@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Javi Merino <javi.merino@arm.com>
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: kexec@lists.infradead.org
      Cc: linux-doc@vger.kernel.org
      Cc: lkml <linux-kernel@vger.kernel.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Stefan Lippers-Hollmann <s.l-h@gmx.de>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Link: http://lkml.kernel.org/r/20151210014628.25437.75256.stgit@softrs
      [ Cleanup comments, fixup formatting. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      58c5661f
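      On the x86 side, the direct call of point 2 is tiny. Sketched from the
      description above; treat the names as approximations of the merged code.

        /* A CPU spinning after panic() in NMI context runs the crash
         * callback itself once the shootdown IPI has been issued, since
         * the IPI's NMI can never be delivered to it. */
        void run_crash_ipi_callback(struct pt_regs *regs)
        {
        	if (crash_ipi_issued)
        		crash_nmi_callback(0, regs);	/* save regs, then halt */
        }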
    • panic, x86: Fix re-entrance problem due to panic on NMI · 1717f209
      Committed by Hidehiro Kawai
      If a panic on NMI happens just after panic() on the same CPU, panic() is
      called recursively. The kernel stalls, as a result, after failing to
      acquire panic_lock.
      
      To avoid this problem, don't call panic() in NMI context if we've
      already entered panic().
      
      For that, introduce nmi_panic() macro to reduce code duplication. In
      the case of panic on NMI, don't return from NMI handlers if another CPU
      already panicked.
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Aaron Tomlin <atomlin@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Javi Merino <javi.merino@arm.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: kexec@lists.infradead.org
      Cc: linux-doc@vger.kernel.org
      Cc: lkml <linux-kernel@vger.kernel.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Link: http://lkml.kernel.org/r/20151210014626.25437.13302.stgit@softrs
      [ Cleanup comments, fixup formatting. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1717f209
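      The core of the macro is a cmpxchg on a panic-owner CPU id, so only the
      first CPU proceeds into panic(). A condensed sketch of that core:

        #include <linux/atomic.h>

        #define PANIC_CPU_INVALID	-1
        atomic_t panic_cpu = ATOMIC_INIT(PANIC_CPU_INVALID);

        /* Only the first CPU to claim panic_cpu enters panic(); a later
         * NMI on a CPU that lost the race just returns instead of
         * recursing into panic(). */
        #define nmi_panic(fmt, ...)					\
        do {								\
        	int old = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID,	\
        				 raw_smp_processor_id());	\
        	if (old == PANIC_CPU_INVALID)				\
        		panic(fmt, ##__VA_ARGS__);			\
        } while (0)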
  12. 09 Dec 2015 (2 commits)
    • workqueue: implement lockup detector · 82607adc
      Committed by Tejun Heo
      Workqueue stalls can happen from a variety of usage bugs such as
      missing WQ_MEM_RECLAIM flag or concurrency managed work item
      indefinitely staying RUNNING.  These stalls can be extremely difficult
      to hunt down because the usual warning mechanisms can't detect
      workqueue stalls and the internal state is pretty opaque.
      
      To alleviate the situation, this patch implements a workqueue lockup
      detector. It periodically monitors all worker_pools and, if any pool
      fails to make forward progress for longer than the threshold duration,
      triggers a warning and dumps workqueue state as follows.
      
       BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 31s!
       Showing busy workqueues and worker pools:
       workqueue events: flags=0x0
         pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=17/256
           pending: monkey_wrench_fn, e1000_watchdog, cache_reap, vmstat_shepherd, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, release_one_tty, cgroup_release_agent
       workqueue events_power_efficient: flags=0x80
         pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
           pending: check_lifetime, neigh_periodic_work
       workqueue cgroup_pidlist_destroy: flags=0x0
         pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/1
           pending: cgroup_pidlist_destroy_work_fn
       ...
      
      The detection mechanism is controlled through the kernel parameter
      workqueue.watchdog_thresh and can be updated at runtime through the
      sysfs module parameter file.
      
      v2: Decoupled from softlockup control knobs.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Don Zickus <dzickus@redhat.com>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Chris Mason <clm@fb.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      82607adc
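      A hedged sketch of the check the detector performs, simplified from the
      description above (the merged version also tracks touch events and
      rearms a timer; wq_watchdog_check() is an illustrative name):

        static unsigned long wq_watchdog_thresh = 30;	/* seconds, 0 disables */

        static void wq_watchdog_check(void)
        {
        	unsigned long thresh = wq_watchdog_thresh * HZ;
        	struct worker_pool *pool;
        	int pi;

        	rcu_read_lock();
        	for_each_pool(pool, pi) {
        		unsigned long ts = READ_ONCE(pool->watchdog_ts);

        		/* An empty pool cannot be stuck. */
        		if (list_empty(&pool->worklist))
        			continue;

        		if (time_after(jiffies, ts + thresh)) {
        			pr_emerg("BUG: workqueue lockup - pool stuck for %us!\n",
        				 jiffies_to_msecs(jiffies - ts) / 1000);
        			show_workqueue_state();
        		}
        	}
        	rcu_read_unlock();
        }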
    • watchdog: introduce touch_softlockup_watchdog_sched() · 03e0d461
      Committed by Tejun Heo
      touch_softlockup_watchdog() is used to tell the watchdog that a scheduler
      stall is expected. One group of usages comes from paths where the task
      may not be able to yield for a long time, such as performing slow PIO to
      a finicky device or coming out of suspend. The other is to account for
      the scheduler and timer going idle.
      
      For scheduler softlockup detection, there's no reason to distinguish the
      two cases; however, a workqueue lockup detector is planned, and it can
      use the same signals from the former group, while the latter would
      spuriously prevent detection. This patch introduces a new function,
      touch_softlockup_watchdog_sched(), and converts the latter group to call
      it instead. For now the two behave identically, so there is no
      functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      03e0d461
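      With no functional change yet, the split is just a thin wrapper,
      sketched here with the per-CPU timestamp from the watchdog code:

        static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);

        /* Scheduler/timer-idle paths: reset only the softlockup stamp. */
        void touch_softlockup_watchdog_sched(void)
        {
        	__this_cpu_write(watchdog_touch_ts, 0);
        }

        /* All other touch sites keep using this; for now it is a pure
         * alias, which later lets the workqueue watchdog hook in here. */
        void touch_softlockup_watchdog(void)
        {
        	touch_softlockup_watchdog_sched();
        }
        EXPORT_SYMBOL(touch_softlockup_watchdog);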
  13. 06 Nov 2015 (7 commits)