1. 29 Aug 2009, 3 commits
  2. 26 Aug 2009, 1 commit
  3. 25 Aug 2009, 2 commits
    • timekeeping: Fix invalid getboottime() value · 36d47481
      Authored by Hiroshi Shimamoto
      Don't use timespec_add_safe() with wall_to_monotonic, because
      wall_to_monotonic holds negative values, which cause an overflow
      in timespec_add_safe(). That makes btime in /proc/stat invalid.
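
      As a rough illustration (the literal patch is not reproduced in
      this log): the shape of the fix is to add the two offsets
      field-by-field and renormalize, rather than go through the
      overflow-checking helper. The total_sleep_time variable is
      assumed here from the 2.6.31 timekeeping code.

      	/* Sketch: avoid timespec_add_safe(), whose overflow check
      	 * misfires on the negative wall_to_monotonic offset. */
      	void getboottime(struct timespec *ts)
      	{
      		set_normalized_timespec(ts,
      			-(wall_to_monotonic.tv_sec +
      			  total_sleep_time.tv_sec),
      			-(wall_to_monotonic.tv_nsec +
      			  total_sleep_time.tv_nsec));
      	}
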
      Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Daniel Walker <dwalker@fifo99.com>
      LKML-Reference: <4A937FDE.4050506@ct.jp.nec.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • timekeeping: Fix up read_persistent_clock() breakage on sh · 0ceb4c3e
      Authored by Paul Mundt
      The recent commit "timekeeping: Increase granularity of
      read_persistent_clock()" reworked read_persistent_clock() and
      inadvertently broke the sh conversion:
      
      	arch/sh/kernel/time.c:45: error: passing argument 1 of 'rtc_sh_get_time' from incompatible pointer type
      	distcc[13470] ERROR: compile arch/sh/kernel/time.c on sprygo/32 failed
      	make[2]: *** [arch/sh/kernel/time.o] Error 1
      
      This trivial fix gets it working again.
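
      As a sketch of what that trivial fix looks like (the diff itself
      is not shown in this log, and the exact body is an assumption):
      with the reworked void read_persistent_clock(struct timespec *ts)
      prototype, the timespec can simply be passed straight through to
      the RTC helper.

      	/* arch/sh/kernel/time.c, sketched against the new prototype */
      	void read_persistent_clock(struct timespec *ts)
      	{
      		rtc_sh_get_time(ts);
      	}
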
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      LKML-Reference: <20090824223239.GB20832@linux-sh.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 23 Aug 2009, 1 commit
  5. 22 Aug 2009, 2 commits
    • time: Introduce CLOCK_REALTIME_COARSE · da15cfda
      Authored by john stultz
      After talking with some application writers who want very fast,
      but not fine-grained timestamps, I decided to try to implement new
      clock_ids for clock_gettime(): CLOCK_REALTIME_COARSE and
      CLOCK_MONOTONIC_COARSE, which return the time at the last tick.
      This is very fast as we don't have to access any hardware (which
      can be very painful if you're using something like the acpi_pm
      clocksource), and we can even use the vdso clock_gettime() method
      to avoid the syscall. The only trade-off is that you get low-res,
      tick-grained time resolution.
      
      This isn't a new idea; I know Ingo has a patch in the -rt tree
      that made the vsyscall gettimeofday() return coarse-grained time
      when the vsyscall64 sysctl was set to 2. However, that affects all
      applications on a system.
      
      With this method, applications can choose the proper speed/granularity
      trade-off for themselves.
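
      A minimal userspace sketch of that trade-off (standard POSIX
      calls only; nothing below is taken from the patch itself):

      	#include <stdio.h>
      	#include <time.h>

      	int main(void)
      	{
      		struct timespec fine, coarse, res;

      		/* Fine-grained read: may touch clocksource hardware. */
      		clock_gettime(CLOCK_REALTIME, &fine);
      		/* Coarse read: time at the last tick, no hardware
      		 * access, vdso-friendly. */
      		clock_gettime(CLOCK_REALTIME_COARSE, &coarse);
      		/* Resolution reflects the tick length. */
      		clock_getres(CLOCK_REALTIME_COARSE, &res);

      		printf("fine:   %ld.%09ld\n",
      		       (long)fine.tv_sec, fine.tv_nsec);
      		printf("coarse: %ld.%09ld\n",
      		       (long)coarse.tv_sec, coarse.tv_nsec);
      		printf("coarse res: %ld ns\n", res.tv_nsec);
      		return 0;
      	}
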
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: nikolag@ca.ibm.com
      Cc: Darren Hart <dvhltc@us.ibm.com>
      Cc: arjan@infradead.org
      Cc: jonathan@jonmasters.org
      LKML-Reference: <1250734414.6897.5.camel@localhost.localdomain>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86: Do not unregister PIT clocksource on PIT oneshot setup/shutdown · 8cab02dc
      Authored by Thomas Gleixner
      This basically reverts commit 1a0c009a (x86: unregister PIT
      clocksource when PIT is disabled), because the problem that patch
      tried to address has been solved by commit 3f68535a (clocksource:
      sanity check sysfs clocksource changes).
      
      The problem addressed by the original patch is that the PIT could
      be selected as clocksource after the system switched the PIT off
      or put the PIT into oneshot mode, which would result in complete
      timekeeping wreckage.
      
      Now, with the sysfs sanity check in place, the PIT cannot be
      selected again when the system is in oneshot mode. The system will
      not switch to oneshot mode as long as the PIT is installed, because
      the PIT is not suitable for oneshot operation.
      
      The shutdown case, which happens when the lapic timer is
      installed, is covered by the fact that init_pit_clocksource() is
      called after the lapic timer takes over, and then does not install
      the PIT clocksource at all.
      
      We should have done the sanity checks back then, but ...
      
      This also solves the locking problem which was reported vs. the
      clocksource rework.
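
      For illustration, the sanity check that makes this revert safe
      amounts to a guard of roughly the following shape (a sketch, not
      the exact code of 3f68535a; the helper name is made up):

      	/* Refuse a clocksource that cannot serve oneshot mode. */
      	static int clocksource_is_selectable(struct clocksource *cs)
      	{
      		if (tick_oneshot_mode_active() &&
      		    !(cs->flags & CLOCK_SOURCE_VALID_FOR_HRES))
      			return 0;	/* e.g. the PIT */
      		return 1;
      	}
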
      
      LKML-Reference: <new-submission>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: john stultz <johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  6. 19 Aug 2009, 2 commits
    • clocksource: Avoid clocksource watchdog circular locking dependency · 01548f4d
      Authored by Martin Schwidefsky
      stop_machine() from a multithreaded workqueue is not allowed
      because of a circular locking dependency between cpu_down() and
      the workqueue execution. Use a kernel thread to do the clocksource
      downgrade.
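
      In sketch form (the kthread function name follows the commit, the
      body is an assumption): the multithreaded watchdog work no longer
      runs the downgrade itself, it just spawns a kernel thread.

      	static void clocksource_watchdog_work(struct work_struct *work)
      	{
      		/*
      		 * The kthread may use stop_machine() safely; if
      		 * kthread_run() fails, the next watchdog scan will
      		 * find the unstable clocksource again.
      		 */
      		kthread_run(clocksource_watchdog_kthread, NULL,
      			    "kwatchdog");
      	}
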
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: john stultz <johnstul@us.ibm.com>
      LKML-Reference: <20090818170942.3ab80c91@skybase>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • clocksource: Protect the watchdog rating changes with clocksource_mutex · d0981a1b
      Authored by Thomas Gleixner
      Martin pointed out that commit 6ea41d2529 (clocksource: Call
      clocksource_change_rating() outside of watchdog_lock) has a
      theoretical reference count problem. The calls to
      clocksource_change_rating() are now done outside of the clocksource
      mutex and outside of the watchdog lock. A concurrent
      clocksource_unregister() could remove the clock.
      
      Split out the code which changes the rating from
      clocksource_change_rating() into __clocksource_change_rating().
      
      Protect the clocksource_watchdog_work() code sequence with
      clocksource_mutex and call __clocksource_change_rating().
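
      In sketch form (list helpers assumed from the 2.6.31 clocksource
      code), the split looks like this:

      	/* Caller must hold clocksource_mutex. */
      	static void __clocksource_change_rating(struct clocksource *cs,
      						int rating)
      	{
      		list_del(&cs->list);
      		cs->rating = rating;
      		clocksource_enqueue(cs);
      	}

      	void clocksource_change_rating(struct clocksource *cs, int rating)
      	{
      		mutex_lock(&clocksource_mutex);
      		__clocksource_change_rating(cs, rating);
      		mutex_unlock(&clocksource_mutex);
      	}
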
      
      LKML-Reference: <alpine.LFD.2.00.0908171038420.2782@localhost.localdomain>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  7. 15 Aug 2009, 16 commits
  8. 14 Aug 2009, 10 commits
  9. 13 Aug 2009, 3 commits
    • perf_counter: Report the cloning task as parent on perf_counter_fork() · 94d5d1b2
      Authored by Peter Zijlstra
      A bug in commit 9f498cc5 (perf_counter: Full task tracing) makes
      profiling of multi-threaded apps go belly up.
      
      [ output as: (PID:TID):(PPID:PTID) ]
      
       # ./perf report -D | grep FORK
      0x4b0 [0x18]: PERF_EVENT_FORK: (3237:3237):(3236:3236)
      0xa10 [0x18]: PERF_EVENT_FORK: (3237:3238):(3236:3236)
      0xa70 [0x18]: PERF_EVENT_FORK: (3237:3239):(3236:3236)
      0xad0 [0x18]: PERF_EVENT_FORK: (3237:3240):(3236:3236)
      0xb18 [0x18]: PERF_EVENT_FORK: (3237:3241):(3236:3236)
      
      This shows us that the test (from 27d028de: perf report: Update
      for the new FORK/EXIT events) in builtin-report.c:
      
              /*
               * A thread clone will have the same PID for both
               * parent and child.
               */
              if (thread == parent)
                      return 0;
      
      will clearly fail.
      
      The problem is that perf_counter_fork() reports the actual
      parent, instead of the cloning thread.
      
      Fixing that (with the below patch), yields:
      
       # ./perf report -D | grep FORK
      0x4c8 [0x18]: PERF_EVENT_FORK: (1590:1590):(1589:1589)
      0xbd8 [0x18]: PERF_EVENT_FORK: (1590:1591):(1590:1590)
      0xc80 [0x18]: PERF_EVENT_FORK: (1590:1592):(1590:1590)
      0x3338 [0x18]: PERF_EVENT_FORK: (1590:1593):(1590:1590)
      0x66b0 [0x18]: PERF_EVENT_FORK: (1590:1594):(1590:1590)
      
      Which both makes more sense and doesn't confuse perf report
      anymore.
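
      The patch itself is not reproduced in this log; the described
      change boils down to reporting current, the cloning task, instead
      of the real parent when filling in the fork event. Roughly
      (helper and field names assumed from the 2.6.31 perf_counter
      code):

      	/* Report the cloning task as the parent of the event. */
      	task_event->event.ppid = perf_counter_pid(counter, current);
      	task_event->event.ptid = perf_counter_tid(counter, current);
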
      Reported-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: paulus@samba.org
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      LKML-Reference: <1250172882.5241.62.camel@twins>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_counter: Fix an ipi-deadlock · 970892a9
      Authored by Peter Zijlstra
      perf_pending_counter() is called from IRQ context and will call
      perf_counter_disable(); however, perf_counter_disable() uses
      smp_call_function_single(), which doesn't fancy being used with
      IRQs disabled, due to IPI deadlocks.
      
      Fix this by making it use the local __perf_counter_disable()
      call and teaching the counter_sched_out() code about pending
      disables as well.
      
      This should cover the case where a counter migrates before the
      pending queue gets processed.
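
      As a sketch of the resulting control flow (names taken from the
      commit text, the body is an assumption), the pending handler now
      disables the counter locally:

      	static void perf_pending_counter(struct perf_pending_entry *entry)
      	{
      		struct perf_counter *counter = container_of(entry,
      				struct perf_counter, pending);

      		if (counter->pending_disable) {
      			counter->pending_disable = 0;
      			/* Local CPU only: no smp_call_function_single(),
      			 * hence no IPI deadlock from IRQ context. */
      			__perf_counter_disable(counter);
      		}
      	}
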
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey J Ashford <cjashfor@us.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      LKML-Reference: <20090813103655.244097721@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Rework/fix the whole read vs group stuff · 3dab77fb
      Authored by Peter Zijlstra
      Replace PERF_SAMPLE_GROUP with PERF_SAMPLE_READ and introduce
      PERF_FORMAT_GROUP to deal with group reads in a more generic
      way.
      
      This allows you to get group reads out of read() as well.
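
      For orientation, the group read layout this enables looks roughly
      like the following (a sketch of the ABI as described; field names
      are assumptions):

      	/*
      	 * read() on a group leader with PERF_FORMAT_GROUP set
      	 * returns all member values in one go:
      	 *
      	 *	struct read_format {
      	 *		u64	nr;		// counters in group
      	 *		struct {
      	 *			u64	value;
      	 *		} cnt[nr];
      	 *	};
      	 */
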
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey J Ashford <cjashfor@us.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      LKML-Reference: <20090813103655.117411814@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>