1. 15 September 2009, 1 commit
  2. 12 September 2009, 1 commit
    • clocksource: Resolve cpu hotplug dead lock with TSC unstable, fix crash · f79e0258
      Martin Schwidefsky committed
      The watchdog timer is started after the watchdog clocksource
      and at least one watched clocksource have been registered. The
      clocksource work element watchdog_work is initialized just
      before the clocksource timer is started. This is too late for
      the clocksource_mark_unstable call from native_cpu_up. To fix
      this, use a static initializer for watchdog_work.
      
      This resolves a boot crash reported by multiple people.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      LKML-Reference: <20090911153305.3fe9a361@skybase>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
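      A minimal sketch of the fix (using the kernel/time/clocksource.c names
      from the message; a sketch, not the verbatim patch): a static
      DECLARE_WORK() initializer makes watchdog_work valid from early boot,
      so clocksource_mark_unstable() can schedule it before the clocksource
      timer has ever been started.
      
      	static void clocksource_watchdog_work(struct work_struct *work);
      
      	/* statically initialized: usable before the watchdog timer is set up */
      	static DECLARE_WORK(watchdog_work, clocksource_watchdog_work);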
  3. 31 August 2009, 1 commit
  4. 29 August 2009, 1 commit
    • clocksource: Resolve cpu hotplug dead lock with TSC unstable · 7285dd7f
      Thomas Gleixner committed
      Martin Schwidefsky analyzed it:
      To register a clocksource, the clocksource_mutex is acquired and, if
      necessary, timekeeping_notify is called to install the clocksource as
      the timekeeper clock. timekeeping_notify uses stop_machine, which
      needs to take the cpu_add_remove_lock mutex.
      Starting a new cpu is done with the cpu_add_remove_lock mutex held.
      native_cpu_up checks the tsc of the new cpu and, if the tsc is no
      good, clocksource_change_rating is called, which needs the
      clocksource_mutex. The deadlock is complete.
      
      The solution is to replace the TSC via the clocksource watchdog
      mechanism. Mark the TSC as unstable and schedule the watchdog work so
      it gets removed in the watchdog thread context.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <new-submission>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: John Stultz <johnstul@us.ibm.com>
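      A simplified sketch of the idea (watchdog_lock, CLOCK_SOURCE_UNSTABLE
      and watchdog_work as named in kernel/time/clocksource.c; the real
      patch also links the clock into the watchdog list): the hotplug path
      only takes the watchdog spinlock, flags the TSC and defers the actual
      rating change to the watchdog work, never touching clocksource_mutex.
      
      	void clocksource_mark_unstable(struct clocksource *cs)
      	{
      		unsigned long flags;
      
      		spin_lock_irqsave(&watchdog_lock, flags);
      		if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) {
      			/* only flag it; the downgrade runs in work context */
      			cs->flags |= CLOCK_SOURCE_UNSTABLE;
      			schedule_work(&watchdog_work);
      		}
      		spin_unlock_irqrestore(&watchdog_lock, flags);
      	}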
  5. 26 August 2009, 1 commit
  6. 25 August 2009, 2 commits
    • timekeeping: Fix invalid getboottime() value · 36d47481
      Hiroshi Shimamoto committed
      Don't use timespec_add_safe() with wall_to_monotonic, because
      wall_to_monotonic has negative values which will cause overflow
      in timespec_add_safe(). That makes btime in /proc/stat invalid.
      Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Daniel Walker <dwalker@fifo99.com>
      LKML-Reference: <4A937FDE.4050506@ct.jp.nec.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
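      A sketch of the corrected getboottime() (assuming the timekeeping
      globals wall_to_monotonic and total_sleep_time): sum the raw fields,
      which may be negative, and let set_normalized_timespec() handle the
      signs instead of relying on timespec_add_safe(), whose overflow
      clamping mangles negative inputs.
      
      	void getboottime(struct timespec *ts)
      	{
      		struct timespec boottime = {
      			.tv_sec  = wall_to_monotonic.tv_sec + total_sleep_time.tv_sec,
      			.tv_nsec = wall_to_monotonic.tv_nsec + total_sleep_time.tv_nsec
      		};
      
      		/* both offsets are negative; negate to get the boot time */
      		set_normalized_timespec(ts, -boottime.tv_sec, -boottime.tv_nsec);
      	}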
    • timekeeping: Fix up read_persistent_clock() breakage on sh · 0ceb4c3e
      Paul Mundt committed
      The recent commit "timekeeping: Increase granularity of
      read_persistent_clock()" introduced read_persistent_clock()
      rework which inadvertently broke the sh conversion:
      
      	arch/sh/kernel/time.c:45: error: passing argument 1 of 'rtc_sh_get_time' from incompatible pointer type
      	distcc[13470] ERROR: compile arch/sh/kernel/time.c on sprygo/32 failed
      	make[2]: *** [arch/sh/kernel/time.o] Error 1
      
      This trivial fix gets it working again.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      LKML-Reference: <20090824223239.GB20832@linux-sh.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
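      For context, the rework changed read_persistent_clock() to fill in a
      struct timespec through a pointer argument, so the sh wrapper just
      has to hand that pointer straight through (a sketch, assuming
      rtc_sh_get_time() takes a struct timespec *):
      
      	void read_persistent_clock(struct timespec *ts)
      	{
      		rtc_sh_get_time(ts);	/* was rtc_sh_get_time(&ts): wrong pointer type */
      	}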
  7. 23 August 2009, 1 commit
  8. 22 August 2009, 2 commits
    • time: Introduce CLOCK_REALTIME_COARSE · da15cfda
      john stultz committed
      After talking with some application writers who want very fast, but not
      fine-grained timestamps, I decided to try to implement new clock_ids
      to clock_gettime(): CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE,
      which return the time at the last tick. This is very fast as we don't
      have to access any hardware (which can be very painful if you're using
      something like the acpi_pm clocksource), and we can even use the vdso
      clock_gettime() method to avoid the syscall. The only trade-off is
      that you only get low-res, tick-grained time resolution.
      
      This isn't a new idea; I know Ingo has a patch in the -rt tree that
      made the vsyscall gettimeofday() return coarse-grained time when the
      vsyscall64 sysctl was set to 2. However, that affects all applications
      on a system.
      
      With this method, applications can choose the proper speed/granularity
      trade-off for themselves.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: nikolag@ca.ibm.com
      Cc: Darren Hart <dvhltc@us.ibm.com>
      Cc: arjan@infradead.org
      Cc: jonathan@jonmasters.org
      LKML-Reference: <1250734414.6897.5.camel@localhost.localdomain>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
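      A small userspace example of the resulting API (clock_gettime() and
      clock_getres() are standard POSIX; the _COARSE ids are Linux-specific):
      
      	#include <stdio.h>
      	#include <time.h>
      
      	int main(void)
      	{
      		struct timespec ts, res;
      
      		/* tick-cached time: no clocksource hardware access, vdso-friendly */
      		clock_gettime(CLOCK_REALTIME_COARSE, &ts);
      
      		/* the reported resolution is the tick length, not 1 ns */
      		clock_getres(CLOCK_REALTIME_COARSE, &res);
      
      		printf("coarse time %ld.%09ld, resolution %ld.%09ld\n",
      		       (long)ts.tv_sec, ts.tv_nsec, (long)res.tv_sec, res.tv_nsec);
      		return 0;
      	}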
    • x86: Do not unregister PIT clocksource on PIT oneshot setup/shutdown · 8cab02dc
      Thomas Gleixner committed
      This basically reverts commit 1a0c009a (x86: unregister PIT
      clocksource when PIT is disabled) because the problem that patch
      tried to address has been solved by commit 3f68535a (clocksource:
      sanity check sysfs clocksource changes).
      
      The problem addressed by the original patch is that the PIT could be
      selected as clocksource after the system switched the PIT off or set
      the PIT into oneshot mode, which would result in complete timekeeping
      wreckage.
      
      Now, with the sysfs sanity check in place, the PIT cannot be selected
      again when the system is in oneshot mode. The system will not switch
      to oneshot mode as long as the PIT is installed, because the PIT is
      not suitable for oneshot operation.
      
      The shutdown case, which happens when the lapic timer is installed,
      is covered by the fact that init_pit_clocksource() is called after
      the lapic timer takes over, and then does not install the PIT
      clocksource at all.
      
      We should have done the sanity checks back then, but ...
      
      This also solves the locking problem which was reported vs. the
      clocksource rework.
      
      LKML-Reference: <new-submission>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: john stultz <johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  9. 19 August 2009, 2 commits
    • clocksource: Avoid clocksource watchdog circular locking dependency · 01548f4d
      Martin Schwidefsky committed
      stop_machine from a multithreaded workqueue is not allowed because
      of a circular locking dependency between cpu_down and the workqueue
      execution. Use a kernel thread to do the clocksource downgrade.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: john stultz <johnstul@us.ibm.com>
      LKML-Reference: <20090818170942.3ab80c91@skybase>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
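      A minimal sketch of the change (clocksource_watchdog_kthread is the
      helper this fix introduces; a sketch, not the verbatim patch): the
      work item no longer does the downgrade itself but spawns a one-shot
      kernel thread, which may safely use stop_machine().
      
      	static void clocksource_watchdog_work(struct work_struct *work)
      	{
      		/*
      		 * stop_machine() must not run from a multithreaded
      		 * workqueue, so hand the downgrade to a kernel thread.
      		 */
      		kthread_run(clocksource_watchdog_kthread, NULL, "kwatchdog");
      	}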
    • clocksource: Protect the watchdog rating changes with clocksource_mutex · d0981a1b
      Thomas Gleixner committed
      Martin pointed out that commit 6ea41d2529 (clocksource: Call
      clocksource_change_rating() outside of watchdog_lock) has a
      theoretical reference count problem. The calls to
      clocksource_change_rating() are now done outside of the clocksource
      mutex and outside of the watchdog lock. A concurrent
      clocksource_unregister() could remove the clock.
      
      Split out the code which changes the rating from
      clocksource_change_rating() into __clocksource_change_rating().
      
      Protect the clocksource_watchdog_work() code sequence with the
      clocksource_mutex and call __clocksource_change_rating().
      
      LKML-Reference: <alpine.LFD.2.00.0908171038420.2782@localhost.localdomain>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
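      A sketch of the resulting split (list handling simplified;
      clocksource_enqueue() is the existing re-sort helper): the unlocked
      helper does the actual re-enqueue, the public entry point wraps it
      in clocksource_mutex, and clocksource_watchdog_work() calls the
      helper while holding the same mutex.
      
      	/* caller must hold clocksource_mutex */
      	static void __clocksource_change_rating(struct clocksource *cs, int rating)
      	{
      		list_del(&cs->list);
      		cs->rating = rating;
      		clocksource_enqueue(cs);	/* re-sort by rating */
      	}
      
      	void clocksource_change_rating(struct clocksource *cs, int rating)
      	{
      		mutex_lock(&clocksource_mutex);
      		__clocksource_change_rating(cs, rating);
      		mutex_unlock(&clocksource_mutex);
      	}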
  10. 15 August 2009, 16 commits
  11. 14 August 2009, 10 commits
  12. 13 August 2009, 2 commits
    • perf_counter: Report the cloning task as parent on perf_counter_fork() · 94d5d1b2
      Peter Zijlstra committed
      A bug in commit 9f498cc5 (perf_counter: Full task tracing) makes
      profiling of multi-threaded apps go belly up.
      
      [ output as: (PID:TID):(PPID:PTID) ]
      
       # ./perf report -D | grep FORK
      0x4b0 [0x18]: PERF_EVENT_FORK: (3237:3237):(3236:3236)
      0xa10 [0x18]: PERF_EVENT_FORK: (3237:3238):(3236:3236)
      0xa70 [0x18]: PERF_EVENT_FORK: (3237:3239):(3236:3236)
      0xad0 [0x18]: PERF_EVENT_FORK: (3237:3240):(3236:3236)
      0xb18 [0x18]: PERF_EVENT_FORK: (3237:3241):(3236:3236)
      
      This shows us that the test (27d028de: perf report: Update for the
      new FORK/EXIT events) in builtin-report.c:
      
              /*
               * A thread clone will have the same PID for both
               * parent and child.
               */
              if (thread == parent)
                      return 0;
      
      will clearly fail.
      
      The problem is that perf_counter_fork() reports the actual
      parent, instead of the cloning thread.
      
      Fixing that (with the below patch) yields:
      
       # ./perf report -D | grep FORK
      0x4c8 [0x18]: PERF_EVENT_FORK: (1590:1590):(1589:1589)
      0xbd8 [0x18]: PERF_EVENT_FORK: (1590:1591):(1590:1590)
      0xc80 [0x18]: PERF_EVENT_FORK: (1590:1592):(1590:1590)
      0x3338 [0x18]: PERF_EVENT_FORK: (1590:1593):(1590:1590)
      0x66b0 [0x18]: PERF_EVENT_FORK: (1590:1594):(1590:1590)
      
      This both makes more sense and doesn't confuse perf report
      anymore.
      Reported-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: paulus@samba.org
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      LKML-Reference: <1250172882.5241.62.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
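      The gist of the fix as a sketch (perf_counter_pid()/perf_counter_tid()
      are the kernel-internal helpers that fill the event fields): report
      current, the cloning task, in the parent fields instead of the real
      parent, so a thread clone keeps PID == PPID as builtin-report.c
      expects.
      
      	/* in perf_counter_task_output(), when emitting PERF_EVENT_FORK */
      	task_event->event.ppid = perf_counter_pid(counter, current);
      	task_event->event.ptid = perf_counter_tid(counter, current);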
    • perf_counter: Fix an ipi-deadlock · 970892a9
      Peter Zijlstra committed
      perf_pending_counter() is called from IRQ context and will call
      perf_counter_disable(); however, perf_counter_disable() uses
      smp_call_function_single(), which doesn't fancy being used with
      IRQs disabled due to IPI deadlocks.
      
      Fix this by making it use the local __perf_counter_disable()
      call and teaching the counter_sched_out() code about pending
      disables as well.
      
      This should cover the case where a counter migrates before the
      pending queue gets processed.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey J Ashford <cjashfor@us.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      LKML-Reference: <20090813103655.244097721@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
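      A sketch of the core change (names from the commit text; simplified):
      since the pending callback already runs on the counter's CPU, in IRQ
      context, it can use the local __perf_counter_disable() variant
      directly instead of the IPI-based perf_counter_disable().
      
      	/* in perf_pending_counter(), which runs in IRQ context */
      	if (counter->pending_disable) {
      		counter->pending_disable = 0;
      		/* local call: no smp_call_function_single(), no IPI deadlock */
      		__perf_counter_disable(counter);
      	}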